Robots.txt Strict Validator

Validate robots.txt with directive-by-directive checks and clear line-level issues.

How to use this validator

  1. Paste your robots.txt content.
  2. Run validate to check for user-agent blocks, wildcard/directive shape, and sitemap URLs.
  3. Fix flagged directives (missing agent, bad sitemap URL, malformed wildcard) and re-run.

Rules & checks

  • Requires at least one User-agent directive.
  • Flags directives that appear before a User-agent is defined.
  • Validates Sitemap as an absolute URL and crawl-delay as a number.
  • Flags unknown directives and missing values.
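The checks above can be sketched as a small line-by-line pass. This is a minimal illustration in Python, not the validator's actual implementation; the directive list and messages are assumptions for the sketch.

```python
from urllib.parse import urlparse

# Assumed set of directives the sketch treats as known.
KNOWN_DIRECTIVES = {"user-agent", "allow", "disallow", "sitemap", "crawl-delay"}

def validate_robots(text):
    """Return a list of (line_number, message) issues for a robots.txt string."""
    issues = []
    seen_agent = False
    for num, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if ":" not in line:
            issues.append((num, "not a 'directive: value' pair"))
            continue
        directive, _, value = line.partition(":")
        directive = directive.strip().lower()
        value = value.strip()
        if directive not in KNOWN_DIRECTIVES:
            issues.append((num, f"unknown directive '{directive}'"))
            continue
        if not value:
            issues.append((num, f"missing value for '{directive}'"))
            continue
        if directive == "user-agent":
            seen_agent = True
        elif directive == "sitemap":
            # Sitemap must be an absolute http(s) URL.
            parsed = urlparse(value)
            if parsed.scheme not in ("http", "https") or not parsed.netloc:
                issues.append((num, "Sitemap must be an absolute URL"))
        elif directive == "crawl-delay":
            try:
                float(value)
            except ValueError:
                issues.append((num, "crawl-delay must be a number"))
        # Allow/Disallow/crawl-delay belong to a User-agent group.
        if directive in ("allow", "disallow", "crawl-delay") and not seen_agent:
            issues.append((num, f"'{directive}' appears before any User-agent"))
    if not seen_agent:
        issues.append((0, "no User-agent directive found"))
    return issues
```

A clean file returns an empty list; each flagged line comes back with its line number, mirroring the line-level issues this tool reports.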

Inputs explained

  • robots.txt

    Paste the full robots.txt. Keep blocks grouped by User-agent and use absolute Sitemap URLs.

When to use it

  • QA robots.txt before deploying
  • SEO teams checking CMS-generated robots
  • Support debugging crawl issues quickly

Common errors

  • User-agent missing before directives
  • Relative or malformed Sitemap URLs
  • Unknown directives or empty values

Limitations

  • Syntax-level validation only; does not simulate crawler precedence or robots meta.
  • Does not fetch or test live URLs.

Tips

  • Group directives by user-agent for clarity
  • Use absolute URLs for sitemap entries
  • Avoid wildcards in strict contexts unless necessary

Examples

Valid

  • User-agent: *
  • Allow: /
  • Sitemap: https://example.com/sitemap.xml

Bad sitemap URL

  • Sitemap: /sitemap.xml -> flagged as non-absolute

Directive before agent

  • Disallow: /admin (before User-agent) -> flagged
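The bad-sitemap case above boils down to an absolute-URL test. A minimal sketch (the function name is hypothetical):

```python
from urllib.parse import urlparse

def is_absolute_sitemap(url):
    """True only for absolute http(s) URLs, e.g. https://example.com/sitemap.xml."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(is_absolute_sitemap("https://example.com/sitemap.xml"))  # True
print(is_absolute_sitemap("/sitemap.xml"))                     # False: no scheme or host
```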

Deep dive

This strict robots.txt validator enforces user-agent directives, sitemap URL validity, and directive ordering entirely in your browser.

Use it to harden robots.txt files generated by a CMS or CDN before they ship to crawlers.

FAQs

Is my robots.txt uploaded anywhere?
No. Validation runs locally and clears on refresh.
Do you simulate crawler behavior?
No. This focuses on syntax and directive shape; precedence varies by crawler.

All validation happens in your browser. No data is sent, logged, or stored.