Website Content Crawler

apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗LangChain, LlamaIndex, and the wider LLM ecosystem.
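
As an illustration of the LangChain integration, here is a minimal sketch that runs the Actor and maps its output to LangChain Documents. The import paths assume a recent @langchain/community release (older versions expose the loader under langchain/document_loaders), and the token and URLs are placeholders:

```typescript
// Minimal sketch: call the Actor from LangChain.js and turn each dataset
// item into a Document ready for a vector store or RAG pipeline.
// Import paths assume a recent @langchain/community release.
import { ApifyDatasetLoader } from '@langchain/community/document_loaders/web/apify_dataset';
import { Document } from '@langchain/core/documents';

const loader = await ApifyDatasetLoader.fromActorCall(
    'apify/website-content-crawler',
    { startUrls: [{ url: 'https://docs.apify.com/' }] }, // placeholder URL
    {
        clientOptions: { token: 'MY_APIFY_TOKEN' }, // placeholder token
        datasetMappingFunction: (item) =>
            new Document({
                pageContent: (item.text as string) ?? '',
                metadata: { source: item.url },
            }),
    },
);

const docs = await loader.load();
console.log(`Loaded ${docs.length} documents`);
```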

Poor CPU utilization due to low usage limit

Open · write2souvik opened this issue 21 days ago

I'm trying to optimize this Actor for crawling larger sites - we have cases averaging around 1500 distinct URLs (and we do actually want all of them). Currently, with memory set to 4 GB, these runs take several hours to complete successfully.

I noticed in the logs (run URL attached) that we're getting very poor concurrency - desired concurrency is at most 2, and often 1. Sample status log:

PlaywrightCrawler:AutoscaledPool: state {"currentConcurrency":1,"desiredConcurrency":1,"systemStatus":{"isSystemIdle":false,"memInfo":{"isOverloaded":false,"limitRatio":0.2,"actualRatio":0},"eventLoopInfo":{"isOverloaded":false,"limitRatio":0.6,"actualRatio":0.124},"cpuInfo":{"isOverloaded":true,"limitRatio":0.4,"actualRatio":0.404},"clientInfo":{"isOverloaded":false,"limitRatio":0.3,"actualRatio":0}}}

The one that catches my attention is the CPU limit ratio of 0.4. I recognize that there are two pools in play, so they can't both take up the full CPU allocation, but our workload barely uses the HttpCrawler at all, since that pool only handles PDF downloads (at the moment, it has made 17 requests via HTTP and 331 via Playwright). As a result, at least half of our CPU quota is sitting idle. The job also peaked at 2.6/4 GB memory usage (65%), so there's clearly headroom that we could be using but aren't.
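
Incidentally, if I'm reading the Crawlee defaults right, those limitRatio values match the SystemStatusOptions thresholds, i.e. the limits on what fraction of recent resource snapshots may be overloaded. Here's a sketch of where they would be set in plain Crawlee (option names assume the open-source crawlee package; we obviously can't reach these settings inside the hosted Actor):

```typescript
// Sketch only, assuming the open-source `crawlee` package: the limitRatio
// values from the status log above correspond to these SystemStatusOptions
// defaults. This just shows where the 0.4 CPU threshold comes from.
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    autoscaledPoolOptions: {
        systemStatusOptions: {
            maxMemoryOverloadedRatio: 0.2,    // memInfo.limitRatio
            maxEventLoopOverloadedRatio: 0.6, // eventLoopInfo.limitRatio
            maxCpuOverloadedRatio: 0.4,       // cpuInfo.limitRatio
            maxClientOverloadedRatio: 0.3,    // clientInfo.limitRatio
        },
    },
    async requestHandler({ page }) {
        // ... page handling would go here ...
    },
});

await crawler.run(['https://example.com']); // placeholder start URL
```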

Before I crank up the resources allocated to the run, is there anything we can do to make better use of them? I'd prefer not to allocate a large amount of capacity if we're only going to be able to use half of it...

janbuchar

Hello, and thank you for your interest in Website Content Crawler! I recommend trying the Actor with 8 GB of RAM or more, which will also give you more CPU capacity. The headless browser will likely find a way to use all the available memory to make crawling faster, so don't worry about the memory appearing underused until you've tried it. Memory management in browsers is a complex topic, and it's hard to predict exactly how the browser will behave when given more resources.
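
If you start your runs programmatically, you can bump the allocation like this; a minimal sketch using the apify-client package (memory is in megabytes, and the token and input here are placeholders):

```typescript
// Sketch: calling the Actor with 8 GB instead of 4 GB via apify-client.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'MY_APIFY_TOKEN' }); // placeholder token

const run = await client.actor('apify/website-content-crawler').call(
    { startUrls: [{ url: 'https://example.com' }] }, // placeholder input
    { memory: 8192 }, // megabytes; more memory also means a larger CPU share
);
console.log(`Run ${run.id} finished with status ${run.status}`);
```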

write2souvik · 20 days ago

I can appreciate that browser memory management is unpredictable, especially across a potentially broad range of web pages. My concern is that, empirically, it is not finding a way to use the memory or CPU available to it: the Actor is self-throttling once the CPU ratio crosses 0.4, and is only using about 60% of memory. Should I expect those ratios to change with a larger total resource allocation?

janbuchar

Yes, it's possible that the browser will be able to work faster with a larger memory allocation. I understand it may be counter-intuitive, but it's a good idea to try it first.

Maintained by Apify
Actor metrics
  • 3k monthly users
  • 465 stars
  • 99.9% runs succeeded
  • 3.1 days response time
  • Created in Mar 2023
  • Modified 10 days ago