Blog / Dated Content Crawler


Developed by

Diarmuid

Maintained by Community

Crawl an entire blog or knowledge base, or filter to just the new content. Supports relevant AI queries by filtering pages by date.

Rating: 5.0 (2 reviews)

Pricing: Pay per usage

Monthly users: 5

Runs succeeded: 97%

Last modified: 3 days ago

Start URLs

startUrls (array, required)

URLs to start with.

Date

relativeStartDate (string, optional)

Optional: Select a relative start date by which to filter pages. Any post published before this date will be excluded. The date is calculated relative to the time the crawler is run (e.g. '2 weeks' includes posts from 2 weeks ago to now). Use the format 'X days', 'X weeks', 'X months', or 'X years'. If no time period is given, the value is treated as a number of seconds.
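As an illustration of the semantics described above, the relative date could be resolved like this. This is a sketch, not the Actor's actual code, and the month/year lengths are rough approximations:

```python
# Sketch of relativeStartDate resolution as described above (illustrative only):
# 'X days'/'X weeks'/'X months'/'X years' become a cutoff relative to now;
# a bare number is treated as a number of seconds.
from datetime import datetime, timedelta

UNIT_SECONDS = {
    "days": 86_400,
    "weeks": 7 * 86_400,
    "months": 30 * 86_400,   # approximation, for illustration
    "years": 365 * 86_400,   # approximation, for illustration
}

def relative_cutoff(value: str, now: datetime) -> datetime:
    parts = value.strip().split()
    if len(parts) == 2:
        # e.g. "2 weeks" -> 2 * one week; accept singular or plural units
        amount, unit = int(parts[0]), parts[1].rstrip("s") + "s"
        seconds = amount * UNIT_SECONDS[unit]
    else:
        # no time period given: treat the value as seconds
        seconds = int(parts[0])
    return now - timedelta(seconds=seconds)
```

Posts published before the returned cutoff would then be excluded from the crawl.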

Date

startDate (string, optional)

Optional: Select a start date by which to filter pages. Any post published before this date will be excluded. Use the format YYYY-MM-DD.

Date selector

dateSelector (string, optional)

Optional: CSS selector identifying where the date is located on the page. This can make date filtering more accurate if the crawler doesn't detect the date correctly on the first pass. Use this if there are multiple dates on the page.
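For example, if post pages render the publish date inside a `<time>` element in the article header (hypothetical markup, for illustration only), the input might include:

```json
{
  "dateSelector": "article header time"
}
```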

Include results with no date

includeResultsWithNoDate (boolean, optional)

Optional: If true, the crawler will include results where no date was detected on the page.

Default value of this property is false

Wait for selector

waitForSelector (string, optional)

CSS selector to wait for before treating the page as loaded. Useful when content on the page loads asynchronously. This is optional and in most cases won't be needed.

Remove HTML elements

removeElementsCssSelector (string, optional)

Optional: CSS Selector to identify elements to remove from the page. This is useful for removing elements that are not needed for the content. For example, you can remove the header and footer of the page.

Default value of this property is "nav, footer, script, style, noscript, svg, img[src^='data:'],\n[role=\"alert\"],\n[role=\"banner\"],\n[role=\"dialog\"],\n[role=\"alertdialog\"],\n[role=\"region\"][aria-label*=\"skip\" i],\n[aria-modal=\"true\"]"

Request Timeout

requestTimeoutSecs (integer, optional)

Optional: How long the crawler will wait for a request to complete. Default is 60 seconds.

Default value of this property is 60

Wait for dynamic content

dynamicContentWaitSecs (integer, optional)

Optional: Used in conjunction with waitForSelector. How long the crawler will wait for the selector to appear. Default is 10 seconds.

Default value of this property is 10
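The two options above work together. For a page whose body loads asynchronously into (say) a `.post-content` container — a hypothetical selector, for illustration — the input might look like:

```json
{
  "waitForSelector": ".post-content",
  "dynamicContentWaitSecs": 15
}
```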

Use Sitemaps

useSitemaps (boolean, optional)

Optional: If checked, the crawler will use the sitemap.xml file to find additional URLs to crawl.

Default value of this property is true

Filter by document last modified

filterByDocumentLastModified (boolean, optional)

Optional: If true, the crawler will include the document's 'last modified' date in the output.

Default value of this property is false

Proxy configuration

proxyConfiguration (object, optional)

Select proxies to be used by your crawler.

Crawler Type

crawlerType (enum, optional)

Select the crawler type to use. Adaptive crawlers run faster by automatically switching between raw HTTP and browser requests. Firefox is less likely to be blocked by websites but may occasionally fail compared to Chrome. NOTE: You cannot use the adaptive crawler options with the 'filter by document last modified' option.

Value options:

"adaptive:firefox": string"adaptive:chrome": string"playwright:firefox": string"playwright:chrome": string

Default value of this property is "adaptive:firefox"

Select URLs by

selectUrlsBy (enum, optional)

Optional: Select URLs by different kinds of patterns. This is similar to includeUrlGlobs but allows for simpler selection; if includeUrlGlobs is provided, this option is ignored.

Subpath (default): selects URLs under the path of the startUrls provided.
All: selects all URLs found.
Same Hostname: selects URLs with the same hostname (does not include subdomains).
Same Domain: selects URLs with the same domain (includes subdomains).
Same Origin: selects URLs with the same hostname and protocol (http/https).

Value options:

"subpath": string"all": string"same-hostname": string"same-domain": string"same-origin": string

Default value of this property is "subpath"

Include URL globs

includeUrlGlobs (array, optional)

Optional: An array of URL globs to include in the crawl. If not provided, the crawler will include only URLs under the path of the startUrls provided. You can test globs at https://www.digitalocean.com/community/tools/glob.

Exclude URL globs

excludeUrlGlobs (array, optional)

Optional: An array of URL globs to exclude from the crawl in order to avoid crawling a particular page or set of pages.
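As a combined illustration of the two glob options (the URLs and paths here are hypothetical), an input that crawls only blog posts while skipping tag pages might look like:

```json
{
  "includeUrlGlobs": ["https://blog.example.com/posts/**"],
  "excludeUrlGlobs": ["https://blog.example.com/posts/tag/**"]
}
```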

Maximum number of pages to crawl

maxCrawlPages (integer, optional)

Optional: The maximum number of pages to visit. This is useful for limiting the crawl to a specific number of pages to avoid overspending.
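Putting the options above together, a minimal example input might look like the following. The URL is hypothetical, and the `startUrls` entries are assumed to follow the usual Apify `{ "url": ... }` shape:

```json
{
  "startUrls": [{ "url": "https://blog.example.com/" }],
  "relativeStartDate": "2 weeks",
  "useSitemaps": true,
  "maxCrawlPages": 200
}
```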

Pricing

Pricing model: Pay per usage

This Actor is paid per platform usage. The Actor is free to use, and you only pay for the Apify platform usage.