
robots.txt Validator

Validate robots.txt syntax, user-agents, and allow/disallow rules locally.


How to use this validator

  1. Paste your robots.txt contents (a well-formed sample appears below this list).
  2. Run the validator to check User-agent blocks, Allow/Disallow directives, and Sitemap lines.
  3. Fix any missing User-agent directives or malformed Sitemap URLs, then re-validate.
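
For instance, a file you might paste in step 1 could look like the following (example.com is a placeholder domain):

```
# Rules for all crawlers
User-agent: *
Disallow: /private/
Allow: /public/

Sitemap: https://example.com/sitemap.xml
```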

Rules & checks

  • Requires at least one User-agent directive.
  • Counts Allow/Disallow directives per agent and keeps them grouped.
  • Parses Sitemap lines to confirm absolute URLs.
  • Runs fully client-side; no fetches or uploads.
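
As an illustration, here is a minimal TypeScript sketch of these structural checks. The `validateRobots` helper and `RobotsReport` shape are hypothetical names for this sketch, not the tool's actual implementation.

```ts
// Hypothetical sketch of the structural checks above; not the tool's real code.
interface RobotsReport {
  errors: string[];
  groups: Map<string, { allow: number; disallow: number }>;
  sitemaps: string[];
}

function validateRobots(input: string): RobotsReport {
  const report: RobotsReport = { errors: [], groups: new Map(), sitemaps: [] };
  let currentAgent: string | null = null;

  for (const [i, raw] of input.split(/\r?\n/).entries()) {
    const line = raw.replace(/#.*$/, "").trim(); // drop comments and padding
    if (!line) continue;

    const sep = line.indexOf(":");
    if (sep === -1) {
      report.errors.push(`Line ${i + 1}: missing ":" separator`);
      continue;
    }
    const field = line.slice(0, sep).trim().toLowerCase();
    const value = line.slice(sep + 1).trim();

    if (field === "user-agent") {
      // Start (or reuse) a rule group for this agent.
      currentAgent = value;
      if (!report.groups.has(value)) {
        report.groups.set(value, { allow: 0, disallow: 0 });
      }
    } else if (field === "allow" || field === "disallow") {
      if (currentAgent === null) {
        report.errors.push(`Line ${i + 1}: ${field} before any User-agent`);
      } else {
        report.groups.get(currentAgent)![field] += 1;
      }
    } else if (field === "sitemap") {
      // new URL() throws on relative input, which confirms absolute URLs.
      try {
        new URL(value);
        report.sitemaps.push(value);
      } catch {
        report.errors.push(`Line ${i + 1}: Sitemap URL is not absolute`);
      }
    }
  }

  if (report.groups.size === 0) {
    report.errors.push("No User-agent directive found");
  }
  return report;
}
```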

Inputs explained

  • robots.txt content

    Paste the full robots.txt file. Keep each User-agent block grouped with its own rules, and include absolute Sitemap URLs when applicable.

When to use it

  • QA robots.txt before deploying SEO changes
  • Validate robots.txt output from CMS/CDN rewrites
  • Compare staging and production directives to catch mistakes

Common errors

  • No User-agent directive
  • Relative or malformed Sitemap URLs
  • Unstructured blocks mixing rules for multiple agents

Limitations

  • Syntax-focused; does not simulate crawler precedence rules or wildcard matching.
  • Does not fetch or test live URLs; checks structure only.

Tips

  • Include at least one Sitemap entry for discoverability.
  • Group rules per User-agent to avoid ambiguity.
  • Use absolute URLs for sitemaps.

Examples

Valid

  User-agent: *
  Disallow: /private
  Allow: /public
  Sitemap: https://example.com/sitemap.xml

Missing user-agent

  Disallow: / -> Invalid (no User-agent)

Deep dive

This robots.txt validator checks user-agent blocks, allow/disallow directives, and sitemap lines entirely in your browser.

Use it to QA CMS/CDN-generated robots.txt files before shipping so crawlers receive clean directives.
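
Continuing the hypothetical validateRobots sketch from Rules & checks, a pre-deploy QA step could run the same checks over generated output:

```ts
// Hypothetical usage of the validateRobots() sketch above.
const generated = [
  "User-agent: *",
  "Disallow: /private",
  "Sitemap: /sitemap.xml", // relative URL: should be flagged
].join("\n");

const report = validateRobots(generated);
console.log(report.errors);
// -> ["Line 3: Sitemap URL is not absolute"]
```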

FAQs

Do you fetch URLs?
No. Validation is structural and stays in your browser.

Do you enforce wildcard patterns?
No. This focuses on syntax and presence of directives; crawler behavior varies by bot.

Is anything stored?
No. Input stays local and clears on refresh.
