That’s because Google can usually find and index all of the important pages on your site.
And they’ll automatically NOT index pages that aren’t important or are duplicate versions of other pages.
That said, there are 3 main reasons that you’d want to use a robots.txt file.
Block Non-Public Pages: Sometimes you have pages on your site that you don’t want indexed. For example, you might have a staging version of a page. Or a login page. These pages need to exist. But you don’t want random people landing on them. This is a case where you’d use robots.txt to block these pages from search engine crawlers and bots.
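Here’s a minimal sketch of what that could look like in a robots.txt file (the /staging/ and /login/ paths are hypothetical; substitute your own URLs):

    # Applies to all crawlers
    User-agent: *
    # Keep the staging copy and login page out of crawlers' reach
    Disallow: /staging/
    Disallow: /login/

Keep in mind that robots.txt blocks crawling, not indexing: a blocked URL can still show up in search results if other pages link to it.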
Maximize Crawl Budget: If you’re having a tough time getting all of your pages indexed, you might have a crawl budget problem. By blocking unimportant pages with robots.txt, Googlebot can spend more of your crawl budget on the pages that actually matter.
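For example, internal search results and filtered URLs often burn crawl budget without adding value. A sketch, assuming those pages live under paths like the hypothetical ones below:

    User-agent: *
    # Don't waste crawls on internal search results or filter combinations
    Disallow: /search/
    Disallow: /*?filter=

(The * wildcard is honored by Googlebot, but it isn’t guaranteed to work with every crawler.)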
Prevent Indexing of Resources: Using meta directives can work just as well as robots.txt for preventing pages from getting indexed. However, meta directives don’t work well for multimedia resources, like PDFs and images. That’s where robots.txt comes into play.
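For instance, to keep crawlers away from every PDF on a site, you could use pattern matching (Googlebot supports the * and $ operators, though they’re not part of the original robots.txt standard, so some crawlers ignore them):

    User-agent: *
    # Block any URL ending in .pdf
    Disallow: /*.pdf$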
The bottom line? Robots.txt tells search engine spiders not to crawl specific pages on your website.
First, check how many of your pages Google has indexed (for example, with a site:yourdomain.com search). If that number matches the number of pages that you want indexed, you don’t need to bother with a robots.txt file.
But if that number is higher than you expected (and you notice indexed URLs that shouldn’t be indexed), then it’s time to create a robots.txt file for your website.