Website Content Crawler
No credit card required
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
I want to address a memory-control issue with the LlamaIndex integration. I used the ACTOR_MEMORY_MBYTES parameter to control RAM usage, but it wasn't reflected in the Actor's console. When we set the memory limit in the Apify UI, it worked and we were able to control the memory. We'd like to know a workaround or solution for controlling memory usage through the API, which we are calling from Python. A solution to this would be very helpful.
Hello, and thank you for your interest in this Actor!
Can you please share the code snippet you're using to call this Actor with LlamaIndex? Being able to reproduce this issue would greatly help us with assessing the source of the problem.
Cheers!
Thank you for the response. Please find the code snippet below.
I double-checked this, and the input for Website Content Crawler does not contain environmentVariables - is this an experiment, or is there some misleading documentation that we should know of?
Also, if you want to control the memory usage of an Actor, you need to set it on the platform. The environment variable only tells the Actor how much memory is available; changing it doesn't actually change the limit.
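In other words, the memory cap has to be passed to the platform when the run is started, not set inside the run. A minimal sketch of how that could look when starting the Actor from Python code; the helper name and the start URL are illustrative, and the keyword arguments are meant for the Apify Python client's ActorClient.call():

```python
# Sketch: cap memory when starting the Actor from code instead of the console.
# The helper name and start URL are illustrative, not from the original thread.
def build_call_kwargs(start_url: str, memory_mbytes: int = 2048) -> dict:
    """Build keyword arguments for starting an Actor run.

    memory_mbytes is passed to the platform when the run is created,
    which is what actually caps the run's RAM - unlike the
    ACTOR_MEMORY_MBYTES environment variable, which only reports it.
    """
    return {
        "run_input": {"startUrls": [{"url": start_url}]},
        "memory_mbytes": memory_mbytes,
    }

# Usage (requires an API token and the apify-client package; not executed here):
# from apify_client import ApifyClient
# client = ApifyClient("<APIFY_TOKEN>")
# run = client.actor("apify/website-content-crawler").call(
#     **build_call_kwargs("https://example.com", memory_mbytes=1024)
# )
```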
Hi, what should we do when we want to change the memory size from our code base? We are not using the console to crawl the URLs. We found the variable deep in the documentation and tried using it. Also, is there a cut-off parameter, other than the notifications, for when usage is going overboard?
I see. If you really just want to set the memory used by the Website Content Crawler that you're launching via reader.load_data, you can do so using the memory_mbytes parameter - see https://docs.llamaindex.ai/en/stable/api_reference/readers/apify/#llama_index.readers.apify.ApifyActor.load_data.
For example:
reader.load_data(
    actor_id='apify/website-content-crawler',
    run_input={...},
    dataset_mapping_function=...,
    memory_mbytes=2048,
)
Thanks for the tip, we will try this out. And for the part where we want to set the cut-off limit, any inputs?
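The cut-off question goes unanswered in the thread, but one common approach is to bound the run from the caller's side rather than relying on usage notifications. A minimal sketch, assuming the crawler's input accepts a maxCrawlPages field and that the run can be given a wall-clock timeout in seconds - both names are assumptions to verify against the Actor's input schema and client documentation:

```python
# Sketch: caller-side cut-off for a crawl run. The field names maxCrawlPages
# and timeout_secs are assumptions; check the Actor's input schema before use.
def build_bounded_run(start_url: str, max_pages: int = 100,
                      timeout_secs: int = 3600) -> dict:
    """Bound a crawl by page count and wall-clock time, as a hard
    cut-off instead of platform usage notifications."""
    return {
        "run_input": {
            "startUrls": [{"url": start_url}],
            "maxCrawlPages": max_pages,   # stop crawling after this many pages
        },
        "timeout_secs": timeout_secs,     # platform aborts the run after this
    }

# Usage (not executed here):
# client.actor("apify/website-content-crawler").call(**build_bounded_run("https://example.com"))
```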
- 3.8k monthly users
- 616 stars
- 99.9% runs succeeded
- 3.4 days response time
- Created in Mar 2023
- Modified 2 days ago