Crawlee + Cheerio

A scraper example that uses Cheerio to parse HTML. It's fast, but it can't run the website's JavaScript or pass JS anti-scraping challenges.

Language

javascript

Tools

nodejs

crawlee

cheerio

Use cases

Starter

Web scraping

src/main.js

// Apify SDK - toolkit for building Apify Actors (Read more at https://docs.apify.com/sdk/js/)
import { Actor } from 'apify';
// Crawlee - web scraping and browser automation library (Read more at https://crawlee.dev)
import { CheerioCrawler, Dataset } from 'crawlee';

// The init() call configures the Actor for its environment. It's recommended to start every Actor with an init().
await Actor.init();

// The structure of the input is defined in input_schema.json.
const { startUrls = ['https://apify.com'], maxRequestsPerCrawl = 100 } = (await Actor.getInput()) ?? {};

// Proxy configuration to rotate IP addresses and prevent blocking (https://docs.apify.com/platform/proxy)
const proxyConfiguration = await Actor.createProxyConfiguration();

const crawler = new CheerioCrawler({
    proxyConfiguration,
    maxRequestsPerCrawl,
    async requestHandler({ enqueueLinks, request, $, log }) {
        log.info('enqueueing new URLs');
        await enqueueLinks();

        // Extract the title from the page.
        const title = $('title').text();
        log.info(`${title}`, { url: request.loadedUrl });

        // Save the URL and title to the Dataset - a table-like storage.
        await Dataset.pushData({ url: request.loadedUrl, title });
    },
});

await crawler.run(startUrls);

// Gracefully exit the Actor process. It's recommended to end every Actor with an exit().
await Actor.exit();

JavaScript Crawlee & CheerioCrawler Actor Template

This template was built with Crawlee to scrape data from a website using Cheerio, wrapped in a CheerioCrawler.

Quick Start

Once you've installed the dependencies, start the Actor:

apify run

Once your Actor is ready, you can push it to the Apify Console:

apify login # first, you need to log in if you haven't already done so
apify push
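When running locally, the Actor reads its input from the local key-value store. As a sketch, you could override the defaults by editing the local input file (assuming the default store location) at storage/key_value_stores/default/INPUT.json; the exact shape of each field is defined by input_schema.json:

```json
{
    "startUrls": ["https://apify.com"],
    "maxRequestsPerCrawl": 50
}
```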

Project Structure

.actor/
├── actor.json # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json # Structure and representation of data produced by an Actor
├── input_schema.json # Input validation & Console form definition
└── output_schema.json # Specifies where an Actor stores its output
src/
└── main.js # Actor entry point and orchestrator
storage/ # Local storage (mirrors Cloud during development)
├── datasets/ # Output items (JSON objects)
├── key_value_stores/ # Files, config, INPUT
└── request_queues/ # Pending crawl requests
Dockerfile # Container image definition

For more information, see the Actor definition documentation.
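As an illustration of the input schema file, a minimal input_schema.json for this template might look like the following (field names are assumed to match the defaults read in src/main.js; see the Apify input schema documentation for the full specification):

```json
{
    "title": "CheerioCrawler template input",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs where the crawl begins.",
            "editor": "requestListSources"
        },
        "maxRequestsPerCrawl": {
            "title": "Max requests per crawl",
            "type": "integer",
            "description": "Upper bound on the number of pages the crawler will open."
        }
    }
}
```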

How it works

The script uses Cheerio to scrape data from a website and stores each page's URL and title in a dataset.

  • The crawler starts with the URLs provided in the startUrls input field, which is defined by the input schema. The number of scraped pages is limited by the maxRequestsPerCrawl field from the input schema.
  • For each URL, the crawler runs requestHandler, which uses the Cheerio library to extract data from the page and saves the title and URL of each page to the dataset. It also logs each result as it is saved.

What's included

  • Apify SDK - toolkit for building Actors
  • Crawlee - web scraping and browser automation library
  • Input schema - define and easily validate a schema for your Actor's input
  • Dataset - store structured data where each object stored has the same attributes
  • Cheerio - a fast, flexible & elegant library for parsing and manipulating HTML and XML
  • Proxy configuration - rotate IP addresses to prevent blocking
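Each call to Dataset.pushData() in src/main.js appends one item to the dataset, so for this template a stored record looks like this (values illustrative):

```json
{
    "url": "https://apify.com",
    "title": "Example page title"
}
```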

Resources

Creating Actors with templates

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.