Website Content Crawler

Pricing

Pay per usage


Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 4.6 (38)
Monthly users: 1.2k
Runs succeeded: >99%
Response time: 3.5 days
Last modified: a day ago


number of saved lines

Open

kocsi opened this issue 8 days ago

I would like to set it to save 100 lines from each input URL. Currently, the Actor stops when it reaches the first 100 lines, even if I enter, say, 10 URLs and the first one alone already yields 100 lines of data.

jiri.spilka

Hi,

Thank you for using Website Content Crawler.

I’m not sure I fully understand your question, so let me try to clarify a couple of points:

  • Regarding content: The crawler will save all the text content that is present on the website.
  • Regarding limiting the number of results: You have set "maxResults": 100, which means the crawler will save content from up to 100 URLs in total. If you need fewer or more results, change this setting.
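For reference, a minimal input sketch for a run with several start URLs might look like the following. Only the "maxResults" field is confirmed by this thread; the "startUrls" shape follows the common Apify input convention and the URLs are placeholders:

```json
{
  "startUrls": [
    { "url": "https://example.com/page-one" },
    { "url": "https://example.com/page-two" }
  ],
  "maxResults": 100
}
```

With this input, the cap applies to the whole run, across all start URLs, which matches the behavior kocsi observes below.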

If you could please clarify your question a bit more, I’d be happy to assist you further.

Thank you, Jiri

KO

kocsi

6 hours ago

I would like to run 10 URLs and have it fetch 100 lines of data from each URL. Currently, if I start it and it finds 100 lines of data on the first URL, it saves those and stops.
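The mismatch kocsi describes, a single global cap versus a per-URL cap, can be sketched with a toy model. This is plain Python for illustration, not the Actor's actual implementation; all names are made up:

```python
def crawl(start_urls, pages_found, max_results):
    """Toy model of a crawler with a single global result cap.

    pages_found maps each start URL to the number of pages crawling
    that URL would yield. The global cap stops the whole run once
    max_results pages have been saved in total.
    """
    saved = []
    for url in start_urls:
        for page_index in range(pages_found[url]):
            if len(saved) >= max_results:
                # Global cap reached: all remaining start URLs are skipped.
                return saved
            saved.append((url, page_index))
    return saved


# Ten start URLs, each of which would yield 100 pages.
urls = [f"https://example.com/site-{i}" for i in range(10)]
pages = {u: 100 for u in urls}

results = crawl(urls, pages, max_results=100)
# Every saved result comes from the first URL; the other nine start
# URLs are never reached, matching the reported behavior.
```

A per-URL cap would instead reset the counter inside the loop over start URLs, which is the behavior kocsi is asking for.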

Pricing

Pricing model: Pay per usage

This Actor is free to use; you pay only for the Apify platform usage its runs consume.