How to use Robots Txt Tester
1. Open the tool.
2. Enter your input.
3. Get your output instantly.
Robots Txt Tester is a fast, secure utility for checking which URLs your robots.txt file blocks or allows.
Does it run entirely in my browser? Yes, it works entirely in your browser.
Is it free? Yes, 100% free with no limits.
The robots.txt file is a plain text file placed at the root of your website (https://example.com/robots.txt) that tells web crawlers which parts of your site they're allowed to access.
It's part of the Robots Exclusion Protocol (REP) — an informal but universally respected standard that well-behaved bots follow voluntarily. It's not technically enforceable (a malicious scraper can ignore it), but all major search engine crawlers — Google, Bing, Yandex, DuckDuckGo — honor it.
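Because the file always lives at the root of the host, you can derive any site's robots.txt URL mechanically. A minimal Python sketch (the function name `robots_url` is ours, not part of any standard library API):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the robots.txt URL for the site hosting `page_url`.

    Per the Robots Exclusion Protocol, the file lives at the
    root of the host: scheme://host/robots.txt.
    """
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))
```

For instance, `robots_url("https://example.com/blog/post?id=1")` yields `https://example.com/robots.txt`.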
User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /admin/public-page.html
User-agent: Googlebot
Disallow: /staging/
User-agent: AdsBot-Google
Disallow: /
Sitemap: https://example.com/sitemap.xml
User-agent: — Which bot this rule applies to. * means all bots. Named bots (like Googlebot) get their own specific rules.
Disallow: — Paths the bot should not crawl. An empty Disallow: means "allow everything" (this is the same as having no rule at all).
Allow: — Overrides a broader Disallow. Useful when you want to block a folder but allow one specific file inside it.
Sitemap: — Points crawlers to your sitemap file. Not part of the original protocol but universally supported.
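You can test rules like these programmatically with Python's standard-library `urllib.robotparser`. A minimal sketch (the bot name `MyBot` is made up; note one caveat: Python's parser applies rules in file order, first match wins, whereas Google uses longest-match precedence, so results can differ when `Allow` and `Disallow` overlap):

```python
from urllib import robotparser

SAMPLE = """\
User-agent: *
Disallow: /admin/
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(SAMPLE.splitlines())

# Blocked: the URL path falls under the Disallow: /admin/ rule.
print(rp.can_fetch("MyBot", "https://example.com/admin/settings"))
# Allowed: no rule matches /blog/post, so crawling defaults to allowed.
print(rp.can_fetch("MyBot", "https://example.com/blog/post"))
```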
Using robots.txt to hide sensitive content
robots.txt is public — anyone can read it. Disallowing /admin/ effectively tells the world that your admin panel is at /admin/. For actual security, use authentication. robots.txt only controls crawling, not access.
Blocking CSS and JavaScript
Googlebot needs to render your pages to understand them. If your CSS and JS files are blocked, Google sees a broken page with missing styles. This can hurt your rankings. Avoid:
# Breaks Google's rendering
Disallow: /static/css/
Disallow: /static/js/
Blocking your own sitemap
Some CMS configurations accidentally block *.xml files, which prevents crawlers from reading the sitemap. Check that your sitemap URL is accessible.
Wrong path format
Paths must start with /. Disallow: admin/ (without the leading slash) is invalid and will be ignored by most crawlers.
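A quick way to catch this mistake is to scan the file for rule values that begin with neither / nor Google's * wildcard. The helper below is a hypothetical sketch, not part of any library:

```python
def find_bad_paths(robots_txt: str) -> list:
    """Return (line_number, line) pairs whose Allow/Disallow value
    is non-empty but starts with neither '/' nor '*'."""
    bad = []
    for number, line in enumerate(robots_txt.splitlines(), start=1):
        field, _, value = line.partition(":")
        if field.strip().lower() in ("allow", "disallow"):
            value = value.split("#")[0].strip()  # ignore trailing comments
            if value and not value.startswith(("/", "*")):
                bad.append((number, line.strip()))
    return bad
```

Running it on a file containing `Disallow: admin/` flags that line, while `Disallow: /ok/` and an empty `Allow:` pass.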
Conflicting rules
When Allow and Disallow rules both match a URL, the most specific rule wins. If they're the same length, Allow wins. Many webmasters expect the opposite — test carefully.
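That precedence rule can be sketched in a few lines. This is a simplified model (it ignores the * and $ wildcards that real crawlers also support, and the function name is ours):

```python
def is_allowed(path: str, rules: list) -> bool:
    """Apply Google-style precedence: the longest matching pattern
    wins; on a length tie, Allow beats Disallow.

    `rules` is a list of (directive, pattern) pairs such as
    ("disallow", "/admin/").
    """
    matches = [(len(pattern), directive)
               for directive, pattern in rules
               if path.startswith(pattern)]
    if not matches:
        return True  # no rule matches: crawling defaults to allowed
    longest = max(length for length, _ in matches)
    winners = {d for length, d in matches if length == longest}
    return "allow" in winners  # tie between Allow and Disallow: Allow wins
```

With rules `[("disallow", "/admin/"), ("allow", "/admin/public-page.html")]`, the path /admin/public-page.html is allowed because the Allow pattern is longer, while /admin/settings stays blocked.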
Some bots have specific user-agent strings you might want to target:
| Bot | User-agent token |
| --- | --- |
| Google Search | Googlebot |
| Google Images | Googlebot-Image |
| Google Ads | AdsBot-Google |
| Bing | Bingbot |
| DuckDuckGo | DuckDuckBot |
Looking for a more detailed deep dive and advanced tips? Read the full article on our blog.

Your data never leaves this device. All processing is handled locally by JavaScript.
Paste your robots.txt file to test if specific URLs are blocked or allowed for Googlebot and other crawlers.