
Creating the ultimate WordPress robots.txt file

One of the things I'd been meaning to do for some time was to get round to creating a robots.txt file for this site. I'd looked at various sources of information, including A Standard For Robot Exclusion and How to create a robots.txt file, but was still unsure as to which bits should be excluded, so, in my usual fashion, I didn't bother doing anything at all!

However, the subject has come up again a couple of times over the last few days at both Wolf-Howl and Connected Internet and pricked my conscience, so I thought it was time to revisit it.

One of the main reasons for creating a robots.txt file is to prevent the search engines from reaching your content from more than one location (i.e. in your monthly archives, your category folders, your XML feed and on your front page), because this could lead to duplicate content issues. Other reasons are that you may have a private directory which you don't want the world to read, but that's something for Big G to explain. Today we're only looking at the SEO reasons for creating the file, so with that in mind, what should be included?

There are a number of different viewpoints on what should and shouldn't be included in the robots.txt file. Some say that you should include wp-content, wp-admin, wp-includes and your feeds in the file, whereas others say those are fine to index but that you shouldn't let Googlebot anywhere near your archives. Another school of thought is to disallow access to your images folder, whilst others warn that your AdSense could go belly up if you make sweeping changes to your robots file. In fact, no matter where you look, people give conflicting advice on the right way to create a robots.txt file for WordPress. No wonder I hadn't done anything about it until now!
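For reference, the more aggressive version people argue over generally looks something like this (a sketch of the debated approach, not a recommendation):

User-agent: *
Disallow: /wp-content/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /feed/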

If you were creating a regular web site you'd include robots meta tags to prevent Google from indexing certain pages. However, you don't have this option within WordPress because all of the meta data is contained within a single file (header.php) which appears on every single WP page, so any "noindex,nofollow" rule would be applied across the whole site and you wouldn't get indexed at all.
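For reference, this is the sort of tag you'd normally drop into the head of an individual page to keep it out of the index:

<meta name="robots" content="noindex,nofollow">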

Taking it back to basics, the reason for creating the darned thing in the first place is to stop Google from potentially ditching your content into the supplemental index, so what should you include in the file to prevent that from happening?

By including the bare minimum in your robots.txt file.

Why do I say that? Well, because unless you know your WordPress install inside out, you could end up shooting yourself in the foot. Every theme, plugin and tweak you've made to your site affects how your site is structured. If you change any of these parameters and don't change your robots.txt file, you could end up seriously screwing yourself over.

A more sensible way of preventing duplicate content is to be more disciplined in the way you structure your site. Michael Gray offers some excellent advice in his video blog about making WordPress search engine friendly, which is to apply only one category per post. I'd not even thought about that before, but he's right. Previously I was applying categories all over the shop, which meant Google could find the same content in half a dozen different places, so now I'm only using one category per post and will tidy up my archives shortly.

I've decided to implement a very basic version of the robots.txt file for the time being and will review the results in a few weeks' time. I decided to keep the images folder accessible to Google because I do get some traffic through image search, and whilst it's not uber sticky, it's still traffic at the end of the day. Equally, I was specific about disallowing Googlebot from indexing /page/ while still allowing the AdSense bot to look through archived pages.
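For what it's worth, that basic version looks roughly like this (Mediapartners-Google is the AdSense crawler; its empty Disallow means it may crawl everything):

User-agent: Mediapartners-Google
Disallow:

User-agent: Googlebot
Disallow: /page/

Because a bot obeys the most specific User-agent group that matches it, the AdSense crawler ignores the Googlebot rule and can still read the paginated archives.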

I'm still confused as to whether to disallow access to the categories or dated archives to prevent possible issues. Looking at my Google data, I can't work out which bit Big G doesn't like, so I'm going to implement the basic version of the robots.txt file first and then see what happens from there.
 
Can you block a crawler via robots.txt forever on your website?
 
Creating a robots.txt file for your WordPress website can help control which pages and content are crawled by search engine bots. Here are some tips for creating the ultimate WordPress robots.txt file:

Start with a blank file: begin by creating a new blank file called robots.txt in the root of your site, either in a text editor or using the file manager in your WordPress hosting account.

Define user-agent directives: a User-agent line specifies which search engine bot the rules that follow apply to. Common user-agents include Googlebot, Bingbot, and other popular search engine bots. Here's an example that allows all user-agents to crawl everything:

User-agent: *
Disallow:
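Conversely, to shut a particular bot out of the whole site (BadBot here is a hypothetical name), you'd disallow everything for it:

User-agent: BadBot
Disallow: /

On the earlier question of blocking a crawler forever: a rule like this stays in force for as long as it stays in the file, but robots.txt is voluntary, so only well-behaved bots will honour it.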

Disallow specific files or directories: You can use the Disallow directive to prevent search engine bots from crawling specific files or directories on your site. For example, to disallow crawling of the wp-admin directory, you would add the following line:

Disallow: /wp-admin/

Allow specific files or directories: in some cases you may want search engine bots to crawl specific content inside a directory you've otherwise disallowed; the Allow directive carves out that exception. For example, to allow crawling of the wp-content/uploads directory, you would add the following line:

Allow: /wp-content/uploads/
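Putting the pieces together, a sketch of a complete file combining the directives above might be:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-content/
Allow: /wp-content/uploads/

Note that the Allow line only does anything when a broader Disallow (here /wp-content/) would otherwise block that path; Google and Bing both honour Allow, though it wasn't part of the original robots exclusion standard.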
 