Generate robots.txt rules locally. Free, private, runs in your browser.
100% private — your files and text never leave your browser. All processing happens locally on your device.
# Generated locally with Convertful
# Review before publishing.
User-agent: *
Allow: /
Disallow: /api/
Sitemap: https://example.com/sitemap.xml
Robots.txt tells cooperative crawlers what they may fetch. It is not authentication, privacy protection, or a guaranteed deindexing mechanism. If a page must stay private, protect it with access control instead of relying on robots.txt.
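To see why, here is a minimal Python sketch using the standard library's `urllib.robotparser`: a cooperative crawler asks `can_fetch()` and honors the answer, but nothing enforces the rules on a client that never asks. The rules and URL are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules that try to "hide" a directory.
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /private/",
])

url = "https://example.com/private/report.html"  # placeholder URL

# A cooperative crawler checks first and skips the URL.
print(parser.can_fetch("*", url))  # False

# The check is voluntary, though: any client can still request the URL
# directly, e.g. with urllib.request.urlopen(url). Only real access
# control (authentication, IP rules) keeps a page private.
```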
A broad Disallow can accidentally block important pages, CSS, JavaScript, image paths, or your entire site. Generate the file locally, read each user-agent group, and test important URLs before publishing it at `/robots.txt`.
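One way to do that review, if Python is available, is to parse the draft rules with the standard library's `urllib.robotparser` and check the verdict for each URL that matters; no network access is needed. The rules and URLs below are placeholders to adapt to your site.

```python
from urllib.robotparser import RobotFileParser

# Draft rules exactly as they will appear at /robots.txt.
draft = """\
User-agent: *
Allow: /
Disallow: /api/
""".splitlines()

parser = RobotFileParser()
parser.parse(draft)

# URLs that must stay crawlable, plus one that should be blocked.
checks = [
    "https://example.com/",
    "https://example.com/pricing",
    "https://example.com/assets/site.css",  # blocked CSS/JS can hurt rendering
    "https://example.com/api/v1/users",     # expected: blocked
]
for url in checks:
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "blocked"
    print(f"{verdict:7}  {url}")
```

Note that `urllib.robotparser` implements the original robots.txt semantics; Google's parser differs in edge cases such as path wildcards and longest-match precedence, so treat this as a sanity check rather than a Googlebot simulator.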
A Sitemap line helps crawlers discover your XML sitemap location. Crawl-delay is recognized by some crawlers but not by Googlebot, so treat it as a courtesy setting rather than a universal crawl-rate control.
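If you want to confirm that those directives are visible to crawler libraries, `urllib.robotparser` can read them back via `site_maps()` and `crawl_delay()` (the former requires Python 3.8+). The file contents here are again placeholders.

```python
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse("""\
User-agent: *
Crawl-delay: 10
Disallow: /api/

Sitemap: https://example.com/sitemap.xml
""".splitlines())

print(parser.site_maps())       # ['https://example.com/sitemap.xml']
print(parser.crawl_delay("*"))  # 10 (a courtesy value; Googlebot ignores it)
```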
No. Convertful only turns the rules you enter into a robots.txt file; it does not crawl your site, test URLs, or submit anything to search engines.
No. Download the generated file, review it carefully, and publish it yourself at your site's `/robots.txt` path when you are ready.
No. Robots.txt is crawler guidance, not access control. Sensitive pages should require authentication or be removed from public access.
Yes. A broad Disallow rule can block important pages, assets, or entire sections from crawling. Review each user-agent group before publishing the file.
Usually yes. A `Sitemap:` line helps crawlers discover your XML sitemap location, especially when it is not at the default `/sitemap.xml` path.