The robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention used to advise cooperating web crawlers and other web robots which parts of an otherwise publicly viewable website they should not access. Robots are often used by search engines to categorize and archive websites, or by webmasters to proofread source code. The standard complements Sitemaps, a robot inclusion standard for websites.
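For illustration, the sketch below (assuming Python, with a hypothetical site and crawler name) shows how a cooperating robot might honour such a file using the standard library's `urllib.robotparser`:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content: ask all robots to stay out of /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A cooperating crawler checks each URL before fetching it.
print(rp.can_fetch("ExampleBot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))         # True
```

Compliance is voluntary: the file only advises well-behaved robots and does not technically block access.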