Crawlee + BeautifulSoup

Crawl and scrape websites using Crawlee and BeautifulSoup. Start from given start URLs and store the results in an Apify dataset.

Language

python

Tools

crawlee

beautifulsoup

Use cases

Starter

Web scraping

src/main.py

1"""This module defines the main entry point for the Apify Actor.
2
3Feel free to modify this file to suit your specific needs.
4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
6https://docs.apify.com/sdk/python
7"""
8
9from apify import Actor, Request
10from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext
11
12
13async def main() -> None:
14    """Main entry point for the Apify Actor.
15
16    This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.
17    Asynchronous execution is required for communication with Apify platform, and it also enhances performance in
18    the field of web scraping significantly.
19    """
20    async with Actor:
21        # Retrieve the Actor input, and use default values if not provided.
22        actor_input = await Actor.get_input() or {}
23        start_urls = [url.get('url') for url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])]
24
25        # Exit if no start URLs are provided.
26        if not start_urls:
27            Actor.log.info('No start URLs specified in Actor input, exiting...')
28            await Actor.exit()
29
30        # Create a crawler.
31        crawler = BeautifulSoupCrawler(
32            # Limit the crawl to max requests. Remove or increase it for crawling all links.
33            max_requests_per_crawl=50,
34        )
35
36        # Define a request handler, which will be called for every request.
37        @crawler.router.default_handler
38        async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
39            url = context.request.url
40            Actor.log.info(f'Scraping {url}...')
41
42            # Extract the desired data.
43            data = {
44                'url': context.request.url,
45                'title': context.soup.title.string if context.soup.title else None,
46                'h1s': [h1.text for h1 in context.soup.find_all('h1')],
47                'h2s': [h2.text for h2 in context.soup.find_all('h2')],
48                'h3s': [h3.text for h3 in context.soup.find_all('h3')],
49            }
50
51            # Store the extracted data to the default dataset.
52            await context.push_data(data)
53
54            # Enqueue additional links found on the current page.
55            await context.enqueue_links()
56
57        # Run the crawler with the starting requests.
58        await crawler.run(start_urls)
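
For reference, main() reads start_urls from the Actor input as a list of objects, each with a url key. A minimal example of that input, written here as the Python dict that Actor.get_input() would return (the second URL is a placeholder):

actor_input = {
    'start_urls': [
        {'url': 'https://apify.com'},
        {'url': 'https://crawlee.dev'},
    ],
}

main() extracts the url values into plain strings, which crawler.run() accepts directly as the starting requests.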

Python Crawlee with BeautifulSoup template

A template for scraping data from websites starting from provided URLs, written in Python. The starting URLs are passed through the Actor's input, which is defined by the input schema. The template uses Crawlee for Python for efficient web crawling, handling each request through a user-defined handler that uses Beautiful Soup to extract data from the page. Enqueued URLs are managed in the request queue, and the extracted data is saved in a dataset for easy access.
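
Most customizations only touch the request handler. Below is a minimal sketch of an alternative handler, assuming the crawler object and the BeautifulSoupCrawlingContext import from src/main.py; the extracted fields (meta description and outbound links) are illustrative choices, not part of the template:

@crawler.router.default_handler
async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
    soup = context.soup

    # Grab the page's meta description, if present.
    description = soup.find('meta', attrs={'name': 'description'})

    await context.push_data({
        'url': context.request.url,
        'description': description.get('content') if description else None,
        # Collect the targets of all links on the page.
        'links': [a['href'] for a in soup.find_all('a', href=True)],
    })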

Included features

  • Apify SDK - a toolkit for building Apify Actors in Python.
  • Crawlee for Python - a web scraping and browser automation library.
  • Input schema - define and validate a schema for your Actor's input.
  • Request queue - manage the URLs you want to scrape in a queue.
  • Dataset - store and access structured data extracted from web pages (both storages are shown in the sketch after this list).
  • Beautiful Soup - a library for pulling data out of HTML and XML files.
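
Crawlee manages the request queue and dataset for you, but the Apify SDK also lets you open them directly. A minimal sketch, assuming it runs inside the async with Actor block from src/main.py; the URL and item values are placeholders, and depending on the SDK version, add_request may expect a Request object or a dict rather than a plain URL string:

# Open the default request queue and dataset explicitly.
request_queue = await Actor.open_request_queue()
dataset = await Actor.open_dataset()

# Add a URL to be crawled and store an item, without going through the crawler.
await request_queue.add_request('https://apify.com')
await dataset.push_data({'url': 'https://apify.com', 'title': 'Apify'})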

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.