I want to set up the robots.txt file at the top of my wiki so that robots can only access view links.
Is there a best practice for preventing indexing/crawling of one or more webs?
Is that as simple as, e.g., adding /bin/view/System to the top-level robots.txt file, or is there something else I need to do?
If it makes a difference, I've got short URLs enabled via Apache.
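In case it helps, here is roughly what I have in mind; the exact paths are my guess, based on the assumption that the short-URL rewrite serves view links from the site root while the other scripts stay under /bin/:

<verbatim>
# Block all script URLs (edit, attach, search, ...) so only view pages are reachable.
User-agent: *
Disallow: /bin/

# Keep crawlers out of a specific web, e.g. System.
# With short URLs the view path would be /System/ rather than /bin/view/System:
Disallow: /System/
# Without short URLs it would presumably be:
# Disallow: /bin/view/System
</verbatim>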
-- VickiBrown - 27 Feb 2014
| Subject | Not sure... |
| Extension | |
| Version | Foswiki 1.1.9 |
| Status | Asked |
| Related Topics | |