Quickstart
From zero to first crawl in under 5 minutes.
1. Install
Scrape ships as a Python package. We recommend installing it through uv:
```bash
uv tool install scrape
scrape --help
```

Or use pipx if you prefer:
```bash
pipx install scrape
```

2. Create an account
Spin up the API server and dashboard:
```bash
scrape-api &
cd web && pnpm dev
```

Open http://localhost:3000/register and create an account. The first registered user becomes the admin.
3. Run your first crawl
From the dashboard, click New job, paste a few URLs, and choose Tier 0 — HTTP only. The job will run in the background and stream live progress to the detail page.
Or use the CLI:
```bash
scrape crawl https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html --max-tier 0 --no-browser
```

4. Get your data
Once the job finishes:
- The dashboard shows extracted rows in a table
- Hit JSON or CSV to download a file
- Or query the SQLite store directly: `data/scrape.db`
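As a minimal sketch of querying the store directly, assuming the database contains a `rows` table with the extracted fields (the actual table name and schema may differ in your version — check with `.tables` in the `sqlite3` shell first):

```python
import sqlite3

def fetch_rows(db_path: str, limit: int = 10):
    """Return up to `limit` rows from a hypothetical `rows` table as dicts."""
    con = sqlite3.connect(db_path)
    try:
        con.row_factory = sqlite3.Row  # access columns by name
        cur = con.execute("SELECT * FROM rows LIMIT ?", (limit,))
        return [dict(r) for r in cur.fetchall()]
    finally:
        con.close()

if __name__ == "__main__":
    for row in fetch_rows("data/scrape.db"):
        print(row)
```

The same query works from any SQLite client, so the store is easy to feed into notebooks or downstream pipelines.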
Next steps
- Read about core concepts — tiers, sessions, fingerprints
- Learn how tier escalation picks the right approach automatically
- Plug in residential proxies for high-volume scraping