Cheerio Scraper

apify/cheerio-scraper

Start URLs

startUrls (array, required)

URLs to start with

Use request queue

useRequestQueue (boolean, optional)

The request queue enables recursive crawling and the use of Pseudo-URLs and the Link selector.

Default value of this property is true

Pseudo-URLs

pseudoUrls (array, optional)

Pseudo-URLs matching the links on the page that you want to enqueue. Combine them with the Link selector to tell the crawler where to find links.

Default value of this property is []
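Pseudo-URL matching can be sketched as follows. In Apify's pseudo-URL format, sections wrapped in [...] contain a regular expression, while the rest of the URL is matched literally. The helper below is only an illustration of that idea, not the Actor's internal implementation:

```javascript
// Illustrative sketch only (not the Actor's internal code):
// sections wrapped in [...] hold a regular expression and
// everything else in the pseudo-URL is matched literally.
function pseudoUrlToRegExp(purl) {
    const pattern = purl
        .split(/(\[.*?\])/) // keep bracketed sections as separate parts
        .map((part) =>
            part.startsWith('[') && part.endsWith(']')
                ? part.slice(1, -1) // raw regex from inside the brackets
                : part.replace(/[.*+?^${}()|[\]\\\/]/g, '\\$&') // escape literal parts
        )
        .join('');
    return new RegExp(`^${pattern}$`);
}

const re = pseudoUrlToRegExp('http://www.example.com/pages/[(\\w|-)+]');
console.log(re.test('http://www.example.com/pages/my-page')); // true
console.log(re.test('http://www.example.com/other')); // false
```

During a crawl, the Link selector first picks the elements to inspect, and only the href values matching one of the pseudo-URLs are enqueued.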

Link selector

linkSelector (string, optional)

A CSS selector matching elements with 'href' attributes that should be enqueued. To enqueue URLs from '<div class="my-class" href=...>' tags, you would enter 'div.my-class'.

Page function

pageFunction (string, required)

A JavaScript function that is executed for each request.
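As a sketch, a minimal page function might look like the following. The `$` and `request` context fields follow the Cheerio Scraper documentation; the selector is a placeholder to adapt to your target site:

```javascript
// Minimal page function sketch. The scraper calls it once per loaded
// page; `$` is the Cheerio handle for the parsed HTML and `request`
// describes the current URL. The returned object is saved as one
// record in the default dataset.
async function pageFunction(context) {
    const { $, request } = context;
    return {
        url: request.url,
        title: $('title').text().trim(),
    };
}
```

In the Actor input, this function is supplied as a string in the pageFunction field.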

Proxy configuration

proxyConfiguration (object, optional)

Choose to use no proxy, Apify Proxy, or provide custom proxy URLs.

Default value of this property is {}
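For example, a minimal configuration that routes requests through Apify Proxy might look like the sketch below. The useApifyProxy and proxyUrls field names follow Apify's standard proxy input; verify them against the Actor's current input schema:

```json
{
    "useApifyProxy": true
}
```

To use custom proxies instead, supply a proxyUrls array of proxy server URLs, e.g. { "proxyUrls": ["http://user:password@myproxy.example.com:8000"] }.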

Debug log

debugLog (boolean, optional)

Debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.

Default value of this property is false

Ignore SSL errors

ignoreSslErrors (boolean, optional)

The crawler will ignore SSL certificate errors.

Default value of this property is false

(UNSTABLE) Save cookies

useCookieJar (boolean, optional)

The scraper will use a cookie jar to persist cookies between requests. This is a temporary solution and the feature is UNSTABLE, meaning that it will most likely be removed in the future and replaced with a different API. Use at your own risk.

Default value of this property is false

Max request retries

maxRequestRetries (integer, optional)

Maximum number of times the request for the page will be retried in case of an error. Setting it to 0 means that the request will be attempted once and will not be retried if it fails.

Default value of this property is 3

Max pages per crawl

maxPagesPerCrawl (integer, optional)

Maximum number of pages that the crawler will open. 0 means unlimited.

Default value of this property is 0

Max result records

maxResultsPerCrawl (integer, optional)

Maximum number of results that will be saved to the dataset. The crawler terminates after reaching this count. 0 means unlimited.

Default value of this property is 0

Max crawling depth

maxCrawlingDepth (integer, optional)

Defines how many links away from the Start URLs the crawler will descend. 0 means unlimited.

Default value of this property is 0

Max concurrency

maxConcurrency (integer, optional)

Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.

Default value of this property is 50

Page load timeout

pageLoadTimeoutSecs (integer, optional)

Maximum time, in seconds, the crawler allows a web page to load.

Default value of this property is 60

Page function timeout

pageFunctionTimeoutSecs (integer, optional)

Maximum time, in seconds, the crawler waits for the page function to finish.

Default value of this property is 60

Custom data

customData (object, optional)

This object will be available on pageFunction's context as customData.

Default value of this property is {}
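A sketch of how the value flows through: if the input contains, say, "customData": { "label": "spring-sale" } (a hypothetical key, not a required field), the page function can read it from its context:

```javascript
// Sketch: `customData` from the Actor input appears unchanged on the
// page function's context. The `label` key is a hypothetical example.
async function pageFunction(context) {
    const { request, customData } = context;
    return {
        url: request.url,
        label: customData.label, // "spring-sale" in this example
    };
}
```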

Initial cookies

initialCookies (array, optional)

The provided cookies will be pre-set on all pages the scraper opens.

Default value of this property is []
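Putting the main fields together, a complete input might look like the sketch below. The object shapes (url, purl, and name/value keys) follow common Apify input conventions; check them against the Actor's current input schema before relying on them:

```json
{
    "startUrls": [{ "url": "http://www.example.com" }],
    "linkSelector": "a[href]",
    "pseudoUrls": [{ "purl": "http://www.example.com/[.*]" }],
    "pageFunction": "async function pageFunction(context) { const { $, request } = context; return { url: request.url, title: $('title').text() }; }",
    "proxyConfiguration": { "useApifyProxy": false },
    "maxPagesPerCrawl": 100,
    "initialCookies": [{ "name": "session", "value": "abc123" }]
}
```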

Developer
Maintained by Community
Actor metrics
  • 1 monthly user
  • 100.0% of runs succeeded
  • Modified about 5 years ago