Free Tool To Generate Robots.txt Instantly

Search Engine Optimization

Robots.txt Generator

Default - All Robots are:
Sitemap: (leave blank if you don't have one)
Search Robots: Google, Google Image, Google Mobile, MSN Search, Yahoo MM, Yahoo Blogs, DMOZ Checker, MSN PicSearch
Restricted Directories: (the path is relative to root and must contain a trailing slash "/")

Now, create a 'robots.txt' file in your root directory, copy the text generated above, and paste it into that file.

About Robots.txt Generator

What is a robots.txt file?

Robots.txt is a text file that website owners create to instruct search engine robots how to crawl pages on their websites. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that govern how robots crawl the web, access and index content, and serve that content to users.

In practice, a robots.txt file indicates whether certain user agents (crawling software) can or cannot crawl parts of a website. The instructions are specified by "Disallow" or "Allow" directives that target the behavior of specific user agents.


User-agent: [user-agent name]
Disallow: [URL string not to be crawled]

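As a sketch of how these directives behave, Python's standard-library `urllib.robotparser` can parse the format above. The crawler name and the /private/ path here are hypothetical examples, not part of any real site:

```python
from urllib import robotparser

# A minimal robots.txt in the format above: one crawler is blocked
# from a hypothetical /private/ directory, everyone else is allowed.
rules = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot may not crawl /private/, but everything else is allowed.
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))         # True
print(rp.can_fetch("OtherBot", "https://example.com/private/page.html"))   # True
```

An empty `Disallow:` line, as under `User-agent: *` here, means that group of crawlers is allowed everywhere.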

Important Features of robots.txt File

  • The robots.txt file is publicly available. To see the robots.txt file of any website, just add "/robots.txt" to the end of its root domain. Anyone can see which pages you do or don't want crawled, so don't use the robots.txt file to hide private user information.
  • Each subdomain on a root domain uses its own robots.txt file.
  • The filename is case sensitive: the file must be named "robots.txt", not "Robots.txt", "robots.TXT", or anything else.
  • A robots.txt file must be placed in a website's top-level directory so that crawlers can find it.

Why do you need robots.txt?

Robots.txt files control crawler access to certain areas of your site. While this can be very dangerous if you accidentally disallow Googlebot from crawling your entire site, there are situations in which a robots.txt file can be very useful.
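That dangerous case is just two lines: `Disallow: /` under `User-agent: *` blocks every URL for every crawler. A quick check with Python's standard library illustrates it (example.com is a placeholder):

```python
from urllib import robotparser

# "Disallow: /" under "User-agent: *" blocks the entire site for
# every crawler -- the accidental lock-out described above.
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

print(rp.can_fetch("Googlebot", "https://example.com/"))          # False
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))  # False
```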

Some common use cases include:

  • Robots.txt files help prevent duplicate content from appearing in SERPs.
  • They help keep entire sections of a website private.
  • They keep internal search results pages from showing up on a public SERP.
  • They specify the location of the sitemap.
  • They prevent search engines from indexing certain files on your website.
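Two of these use cases fit in a few lines of robots.txt. The /search/ path and the sitemap URL below are made-up examples; `urllib.robotparser` also exposes `Sitemap:` lines via `site_maps()` (Python 3.8+):

```python
from urllib import robotparser

# Hypothetical example: keep internal search results pages out of
# crawlers' reach and advertise the sitemap location.
rules = """\
User-agent: *
Disallow: /search/

Sitemap: https://example.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("AnyBot", "https://example.com/search/?q=shoes"))  # False
print(rp.site_maps())  # ['https://example.com/sitemap.xml']
```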

How does robots.txt work?

Search engines have two main jobs:

  1. To crawl the web to discover content
  2. To index that content so that it can be served up to users who are looking for information.

After arriving at a website, a search engine crawler looks for a robots.txt file. If it finds one, the crawler reads that file before continuing through the site. Because the robots.txt file contains instructions about how the search engine should crawl the website, the information found there guides the crawler's further actions on that particular site. If the site does not have a robots.txt file, the crawler proceeds to crawl the rest of the site.
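The first step of that flow can be sketched in Python: whatever page a crawler is about to fetch, the robots.txt it consults sits at the root of that host (the URLs below are placeholders):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the robots.txt location a crawler checks before fetching page_url."""
    parts = urlsplit(page_url)
    # robots.txt always sits at the top level of the host being crawled.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://example.com/blog/post?id=7"))
# https://example.com/robots.txt
```

Note that a subdomain gets its own file: the same function applied to a blog.example.com URL would point at blog.example.com/robots.txt, matching the per-subdomain rule above.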


How can you create your first robots.txt file?

  1. First, choose whether to allow or disallow all robots to access your website. This menu lets you decide whether you want your website to be crawled by default.
  2. Add your XML sitemap file by entering its location in this field.
  3. In the last text box, you have the option to block certain pages or directories from being indexed by search engines.
  4. When you are done, you can download your robots.txt file.
  5. After generating your robots.txt file, upload it to the root directory of your domain.