Create a Robots.txt File

A web robot's primary job is to scan websites and pages for information; these crawlers work tirelessly to collect data on behalf of search engines and other applications. Sometimes, though, there is good reason to keep pages away from search engines. Whether you want to fine-tune crawler access to your site or keep a development site from showing up in Google results, a robots.txt file lets web crawlers know which parts of your site they may collect information from.
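As a sketch of what this looks like in practice, a robots.txt file placed at the root of your site (e.g. https://example.com/robots.txt) might contain rules like the following; the paths shown here are hypothetical examples, not part of the original article:

```
# Apply these rules to all crawlers
User-agent: *
# Block a hypothetical development area from being crawled
Disallow: /dev/
# Allow everything else
Allow: /

# Rules for a specific crawler override the wildcard group
User-agent: Googlebot
Disallow: /private/
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access-control mechanism, so sensitive content should be protected by authentication rather than a Disallow rule.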