Crawl Url List 1 by 1
An Apify.com act that takes a list of URLs and starts a given crawler for each of them.
The act is published at Apify.com as mtrunkat/crawl-url-list-1by1.
You can start this act via a POST request to the following URL, with its input sent as the JSON payload:
https://api.apifier.com/v2/acts/mtrunkat~crawl-url-list-1by1?token=[YOUR_API_TOKEN]
Example input:
You can either send the URL of a publicly hosted file containing your URL list (one URL per line):
```json
{
    "urlListFile": "http://example.com/urllist.txt",
    "crawlerId": "ytXL3jaRKwrfWC9tz",
    "concurrency": 2
}
```
Or you can pass the URLs directly:
```json
{
    "urlList": ["http://example.com", "http://google.com"],
    "crawlerId": "ytXL3jaRKwrfWC9tz",
    "concurrency": 2
}
```
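As a minimal sketch, the POST request could be made from Node.js 18+ (built-in fetch, run as an ES module); the APIFY_TOKEN environment variable is an assumed place to keep your API token, and the crawlerId value is only a placeholder:

```typescript
// Sketch only: starts the act via the API URL documented above.
// Assumes Node.js 18+ (global fetch) and top-level await (ES module).
const input = {
    urlList: ["http://example.com", "http://google.com"],
    crawlerId: "ytXL3jaRKwrfWC9tz", // placeholder – use your own crawler ID
    concurrency: 2,
};

const response = await fetch(
    // APIFY_TOKEN is an assumed environment variable holding your API token.
    `https://api.apifier.com/v2/acts/mtrunkat~crawl-url-list-1by1?token=${process.env.APIFY_TOKEN}`,
    {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(input),
    },
);

console.log(response.status, await response.text());
```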
Possible options:
The options crawlerId, concurrency, and one of urlListFile or urlList are required!
| Option | Type | Description |
|---|---|---|
| urlListFile | String | URL of the text file containing URLs to be crawled, one URL per line. |
| urlList | Array | Array of URLs to be crawled. |
| crawlerId | String | Crawler ID. |
| concurrency | Number | Concurrency of crawler executions. |
| crawlerSettings | Object | Overrides of crawler settings passed to the startExecution call. |
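For instance, an input that also overrides crawler settings might look like the sketch below; the key inside crawlerSettings (here maxCrawledPages) is only an illustrative assumption, and whatever you pass must match the settings accepted by the crawler's startExecution call:

```json
{
    "urlList": ["http://example.com"],
    "crawlerId": "ytXL3jaRKwrfWC9tz",
    "concurrency": 2,
    "crawlerSettings": {
        "maxCrawledPages": 100
    }
}
```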