Simple, transparent pricing
Pay only when you need it. A "scrape" is one job run: a crawl from a single start URL (directory crawl) or one uploaded list processed end to end. Every job includes polite throttling, retries, and robots-aware fetching.
Basic Pack
Try it out
- 2 scrapes (jobs)
- Career page finding + contact/social extraction
- Excel and CSV upload
- Sheets sync available on Monthly
Value Pack
Best value
- 5 scrapes (jobs)
- Career page finding + contact/social extraction
- Excel and CSV upload
- Sheets sync available on Monthly
Monthly
For regular users
- 50 scrapes per month
- Career page finding + contact/social extraction
- Excel, CSV & Google Sheets sync
- Priority support
Frequently Asked Questions
Details about scrapes, limits, and examples.
A scrape is one job run:
- Directory crawl: one start URL, with auto‑pagination until your page/depth limits are reached.
- List enrichment: one uploaded file processed (or one linked Google Sheet run).
All jobs include polite throttling, retry logic for transient errors, and robots.txt awareness. JS rendering is used selectively to maximize reliability and cost‑efficiency.
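To illustrate what "polite throttling, retry logic, and robots.txt awareness" mean in practice, here is a minimal Python sketch. It is not CrawlerIQ's actual implementation; `polite_fetch`, the `fetch` callable, and the `"CrawlerIQ"` user-agent string are all illustrative assumptions.

```python
import time
import urllib.robotparser

def polite_fetch(url, fetch, *, delay=1.0, retries=3, robots=None):
    """Illustrative sketch (not CrawlerIQ's code): fetch a URL with a
    crawl delay, retries on transient errors, and an optional robots check."""
    # Skip URLs the site's robots.txt disallows for our (hypothetical) user agent.
    if robots is not None and not robots.can_fetch("CrawlerIQ", url):
        return None
    for attempt in range(retries):
        # Polite delay before each attempt, doubling on each retry (backoff).
        time.sleep(delay * (2 ** attempt))
        try:
            return fetch(url)
        except (TimeoutError, ConnectionError):
            continue  # transient error: try again
    return None  # gave up after `retries` attempts
```

A robots ruleset can be loaded with the standard library's `urllib.robotparser.RobotFileParser` and passed in as `robots`; transient failures are retried with exponential backoff, and anything else is surfaced to the caller.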
For example, the Basic pack's 2 scrapes could be two directory crawls, or one directory crawl plus one CSV enrichment.
| Feature | CrawlerIQ | Competitors |
|---|---|---|
| No-code Focus | Yes | Varies |
| Pay-per-use Micro-packs | Yes | No |
| Google Sheets Sync | Yes | No |
What counts as 1 scrape?
One start URL (a full directory crawl) or one uploaded list (processed in full). A typical job yields roughly 300 rows of data with emails and links, though volume varies by source.
How we're different
Unlike Clay or Apify, CrawlerIQ is purpose-built for two jobs: (1) crawling directories with auto-pagination, and (2) enriching your company list, with export or sync to Google Sheets. No coding. Pay only when you need it.
Contact us