Meta Robots `noindex` / `nofollow` Guide
Apply robots directives intentionally to control indexation while protecting link discovery.
1. Use `noindex` for low-value pages that should still be crawled.
2. Use `nofollow` sparingly and only for untrusted outbound links.
3. Do not combine `noindex` with robots.txt blocking; a crawler that cannot fetch the page never sees the directive.
4. Confirm directives in rendered HTML, not only source templates.
5. Track affected URLs in GSC coverage and crawl stats.
6. Remove temporary directives after migrations are complete.
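Since point 4 above recommends checking rendered HTML rather than templates, a quick way to audit is to extract the robots directives from the final markup. A minimal sketch using only the Python standard library (the sample HTML is illustrative):

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect the content values of <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            # Split "noindex, follow" into individual normalized tokens.
            self.directives += [d.strip().lower() for d in content.split(",") if d.strip()]


def robots_directives(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives


html = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
print(robots_directives(html))  # -> ['noindex', 'follow']
```

Run this against the DOM your pages actually serve (for example, the rendered HTML from a headless browser), not the raw template, since tag managers and rendering layers can add or strip directives.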
Implementation notes
Keep directive decisions near template ownership. Temporary noindex tags become long-term debt when there is no expiry process, so add review dates in your release checklist.
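The expiry process above can be as simple as a registry of temporarily noindexed paths with review dates, checked during each release. A sketch with hypothetical paths and dates:

```python
from datetime import date

# Hypothetical registry: temporarily noindexed paths -> agreed review date.
TEMP_NOINDEX = {
    "/old-landing/": date(2024, 6, 1),
    "/migration-staging/": date(2026, 1, 15),
}


def overdue_noindex(registry, today):
    """Return paths whose noindex review date has passed."""
    return sorted(path for path, review in registry.items() if review <= today)


print(overdue_noindex(TEMP_NOINDEX, date(2025, 1, 1)))  # -> ['/old-landing/']
```

Wiring this into CI makes the release checklist self-enforcing: a non-empty result fails the build until someone renews or removes the directive.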
FAQ
Can I noindex parameter pages? Yes, when they serve no unique search intent; keep them crawlable so the directive can be read.
Is robots.txt enough? No. Blocking only prevents crawling; an already-indexed URL can remain in the index, and a `noindex` on a blocked page is never seen.
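The robots.txt conflict described in the FAQ can be caught mechanically: any URL that relies on a meta `noindex` but is disallowed in robots.txt is misconfigured, because the directive can never be fetched. A sketch using the standard library's `urllib.robotparser` (rules and URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for the audit.
ROBOTS_TXT = """User-agent: *
Disallow: /params/
"""

# URLs that carry (or should carry) a meta noindex tag.
NOINDEX_URLS = [
    "https://example.com/params/search?q=a",
    "https://example.com/thin-page/",
]

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A conflict: the page is noindexed but crawlers are blocked from reading it.
conflicts = [u for u in NOINDEX_URLS if not parser.can_fetch("*", u)]
print(conflicts)  # -> ['https://example.com/params/search?q=a']
```

Any URL this flags needs one mechanism or the other: either remove the robots.txt block so the `noindex` can take effect, or use robots.txt alone and accept that removal from the index may be slower.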