robots.txt? what is it? And why can't people find it?


Recommended Posts

Hi

If someone can help me out, that would be just wonderful.

I was looking through the pages that visitors have tried to access but that return a 404 error.

One of the pages they have tried to access is the robots.txt page. What is this page?

If anyone out there knows what it is, and how to fix it, please let me know.

Many thanks


The Robot Exclusion Standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web crawlers and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorize and archive web sites, or by webmasters to proofread source code. The standard is different from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.

You can read more about robots.txt here:

https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
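For example, a minimal robots.txt sketch might look like the following (it must live at the root of the site, e.g. https://example.com/robots.txt; the example.com domain and the /admin/ path are just placeholders, not rules specific to your site):

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

The `User-agent: *` line means the rules apply to all crawlers, and `Disallow` lists the paths they are asked not to fetch. The `Sitemap` line is the inclusion counterpart mentioned above.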


It is not people trying to open the file, but crawlers like Google's robot.

This file is for search robots (crawlers), not for people.

You have to create it manually in the back office (BO). More information is in the link I pasted before.
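Once the file exists, you can sanity-check that the rules behave the way you expect with Python's standard-library urllib.robotparser, which parses robots.txt rules and answers whether a given URL would be blocked. The rules and the example.com URLs below are placeholders, not your actual site:

```python
from urllib import robotparser

# Hypothetical rules: ask every crawler to skip /admin/.
rules = [
    "User-agent: *",
    "Disallow: /admin/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A well-behaved crawler would skip /admin/ but may fetch the homepage.
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/"))                # True
```

The same parser can also load the live file directly with `rp.set_url(...)` followed by `rp.read()`, if you want to test what crawlers actually see.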

