Crawler Puppeteer
Copy of https://github.com/apifytech/actor-crawler-puppeteer
Actor Metrics
1 monthly user
2 bookmarks
Created in Jan 2019
Modified 6 years ago
useRequestQueue
booleanOptional
The request queue enables recursive crawling and the use of Pseudo-URLs and the Link selector.
Default value of this property is false
pseudoUrls
arrayOptional
Pseudo-URLs to match links in the page that you want to enqueue. Combine with Link selector to tell the crawler where to find links.
Default value of this property is []
linkSelector
stringOptional
CSS selector matching elements with 'href' attributes that should be enqueued. To enqueue URLs from '...
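For illustration, the two options above might be combined in the actor input as follows. This is a hedged sketch: the bracket-regex PURL syntax and the `purl` object key follow the common Apify convention, and the URL and selector values are invented examples, not taken from this actor's documentation.

```json
{
  "useRequestQueue": true,
  "linkSelector": "a[href]",
  "pseudoUrls": [
    { "purl": "https://example.com/products/[.*]" }
  ]
}
```

With this input, the crawler looks for `a[href]` elements on each page and enqueues only those links whose URLs match the pseudo-URL pattern.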
proxyConfiguration
objectOptional
Choose to use no proxy, Apify Proxy, or provide custom proxy URLs.
Default value of this property is {}
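As an illustration, the proxy configuration object might look like one of the following. The key names (`useApifyProxy`, `proxyUrls`) are assumptions based on the common Apify proxy input shape; the proxy URL is a placeholder.

```json
{
  "proxyConfiguration": {
    "useApifyProxy": false,
    "proxyUrls": ["http://user:password@proxy.example.com:8000"]
  }
}
```

Setting `useApifyProxy` to true would route requests through Apify Proxy instead of the custom URLs; omitting the object entirely (the default `{}`) means no proxy is used.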
debugLog
booleanOptional
Debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.
Default value of this property is false
browserLog
booleanOptional
Console messages from the Browser will be included in the log. This may result in the log being flooded by error messages, warnings and other messages of little value, especially with high concurrency.
Default value of this property is false
downloadMedia
booleanOptional
The crawler will skip downloading media such as images, fonts, videos and sounds. This speeds up the crawl but may break certain websites.
Default value of this property is false
downloadCss
booleanOptional
The crawler will skip downloading CSS stylesheets. This speeds up the crawl but may break certain websites.
Default value of this property is false
ignoreSslErrors
booleanOptional
The crawler will ignore SSL certificate errors.
Default value of this property is false
maxRequestRetries
integerOptional
Maximum number of times the request for a page will be retried in case of an error. A value of 0 means the request will be attempted once and not retried on failure.
Default value of this property is 3
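The retry semantics above can be sketched as follows. This is a hypothetical helper, not the actor's actual implementation: with maxRequestRetries set to N, a failing request is attempted at most N + 1 times in total.

```javascript
// Sketch of retry semantics (illustrative only): the first attempt is not a
// retry, so maxRequestRetries = 3 allows up to 4 total attempts.
async function fetchWithRetries(handler, maxRequestRetries = 3) {
  let attempt = 0;
  for (;;) {
    try {
      // handler receives the current attempt index (0 = first attempt)
      return await handler(attempt);
    } catch (err) {
      if (attempt >= maxRequestRetries) throw err; // retries exhausted
      attempt += 1;
    }
  }
}
```

Setting maxRequestRetries to 0 makes the helper throw on the first error, matching the single-attempt behavior described above.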
maxPagesPerCrawl
integerRequired
Maximum number of pages that the crawler will open. 0 means unlimited.
maxResultsPerCrawl
integerOptional
Maximum number of results that will be saved to the dataset. The crawler terminates once this limit is reached. 0 means unlimited.
Default value of this property is 0
maxCrawlingDepth
integerOptional
Defines how many links away from the Start URLs the crawler will descend. 0 means unlimited.
Default value of this property is 0
pageLoadTimeoutSecs
integerOptional
Maximum time, in seconds, that the crawler will allow a web page to load.
Default value of this property is 60
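Putting the crawl-limit options together, an input fragment might look like the following. All values are illustrative assumptions, not recommended settings.

```json
{
  "maxRequestRetries": 3,
  "maxPagesPerCrawl": 100,
  "maxResultsPerCrawl": 0,
  "maxCrawlingDepth": 2,
  "pageLoadTimeoutSecs": 60
}
```

This configuration would stop the crawl after 100 opened pages or after following links two hops from the Start URLs, whichever comes first, while saving an unlimited number of results.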