Selenium + Chrome

A scraper example built with Selenium and a headless Chrome browser that crawls a website and saves the results to storage. Selenium is a popular alternative to Playwright.

Language

python

Tools

selenium

Use cases

Web scraping

src/main.py (shown below)

src/__main__.py (not shown; a sketch follows the listing)

1"""This module defines the main entry point for the Apify Actor.
2
3Feel free to modify this file to suit your specific needs.
4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
6https://docs.apify.com/sdk/python
7"""
8
9import asyncio
10from urllib.parse import urljoin
11
12from selenium import webdriver
13from selenium.webdriver.chrome.options import Options as ChromeOptions
14from selenium.webdriver.common.by import By
15
16from apify import Actor, Request
17
18# To run this Actor locally, you need to have the Selenium Chromedriver installed.
19# Follow the installation guide at: https://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
20# When running on the Apify platform, the Chromedriver is already included in the Actor's Docker image.
21
22
23async def main() -> None:
24    """Main entry point for the Apify Actor.
25
26    This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.
27    Asynchronous execution is required for communication with Apify platform, and it also enhances performance in
28    the field of web scraping significantly.
29    """
30    async with Actor:
31        # Retrieve the Actor input, and use default values if not provided.
32        actor_input = await Actor.get_input() or {}
33        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])
34        max_depth = actor_input.get('max_depth', 1)
35
36        # Exit if no start URLs are provided.
37        if not start_urls:
38            Actor.log.info('No start URLs specified in actor input, exiting...')
39            await Actor.exit()
40
41        # Open the default request queue for handling URLs to be processed.
42        request_queue = await Actor.open_request_queue()
43
44        # Enqueue the start URLs with an initial crawl depth of 0.
45        for start_url in start_urls:
46            url = start_url.get('url')
47            Actor.log.info(f'Enqueuing {url} ...')
48            request = Request.from_url(url, user_data={'depth': 0})
49            await request_queue.add_request(request)
50
51        # Launch a new Selenium Chrome WebDriver and configure it.
52        Actor.log.info('Launching Chrome WebDriver...')
53        chrome_options = ChromeOptions()
54
55        if Actor.config.headless:
56            chrome_options.add_argument('--headless')
57
58        chrome_options.add_argument('--no-sandbox')
59        chrome_options.add_argument('--disable-dev-shm-usage')
60        driver = webdriver.Chrome(options=chrome_options)
61
62        # Test WebDriver setup by navigating to an example page.
63        driver.get('http://www.example.com')
64        assert driver.title == 'Example Domain'
65
66        # Process the URLs from the request queue.
67        while request := await request_queue.fetch_next_request():
68            url = request.url
69            depth = request.user_data['depth']
70            Actor.log.info(f'Scraping {url} ...')
71
72            try:
73                # Navigate to the URL using Selenium WebDriver. Use asyncio.to_thread for non-blocking execution.
74                await asyncio.to_thread(driver.get, url)
75
76                # If the current depth is less than max_depth, find nested links and enqueue them.
77                if depth < max_depth:
78                    for link in driver.find_elements(By.TAG_NAME, 'a'):
79                        link_href = link.get_attribute('href')
80                        link_url = urljoin(url, link_href)
81
82                        if link_url.startswith(('http://', 'https://')):
83                            Actor.log.info(f'Enqueuing {link_url} ...')
                            # Use a new variable so the fetched `request` is not shadowed before being marked as handled below.
                            new_request = Request.from_url(link_url, user_data={'depth': depth + 1})
                            await request_queue.add_request(new_request)

                # Extract the desired data.
                data = {
                    'url': url,
                    'title': driver.title,
                }

                # Store the extracted data to the default dataset.
                await Actor.push_data(data)

            except Exception:
                Actor.log.exception(f'Cannot extract data from {url}.')

            finally:
                # Mark the request as handled to ensure it is not processed again.
                await request_queue.mark_request_as_handled(request)

        driver.quit()
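
The listing above is src/main.py. The second file, src/__main__.py, is not reproduced here; judging from the docstring's note that main() is executed with asyncio.run(), a minimal entry-point module would look roughly like the following sketch (the template's actual file may also configure logging):

import asyncio

from .main import main

if __name__ == '__main__':
    asyncio.run(main())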

Python Selenium & Chrome template

A template example built with Selenium and a headless Chrome browser to scrape a website and save the results to storage. The start URLs and crawl depth are passed in via the Actor input, which is defined by the input schema. The template uses the Selenium WebDriver to load and process each page. Newly discovered URLs are enqueued in the default request queue, and the scraped data are stored in the default dataset, where you can easily access them.
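
For reference, a run input matching what src/main.py reads could look like the sketch below; the field names start_urls and max_depth come straight from the code, while the values are only illustrative:

{
    "start_urls": [{ "url": "https://apify.com" }],
    "max_depth": 2
}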

Included features

  • Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
  • Input schema - define and easily validate a schema for your Actor's input (a sketch follows this list)
  • Request queue - queues into which you can put the URLs you want to scrape
  • Dataset - store structured data where each object stored has the same attributes
  • Selenium - a browser automation library
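
The input schema mentioned above is a JSON file that describes and validates the Actor's input fields. A minimal sketch in Apify's input schema format, covering only the start_urls and max_depth fields used by the code (titles, descriptions, and defaults here are illustrative, not the template's exact file):

{
    "title": "Python Selenium Scraper",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "start_urls": {
            "title": "Start URLs",
            "type": "array",
            "editor": "requestListSources",
            "description": "URLs to start the crawl from",
            "prefill": [{ "url": "https://apify.com" }]
        },
        "max_depth": {
            "title": "Maximum depth",
            "type": "integer",
            "description": "How many levels of links to follow from the start URLs",
            "default": 1
        }
    },
    "required": ["start_urls"]
}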

How it works

This code is a Python script that uses Selenium to scrape web pages and extract data from them. Here's a brief overview of how it works:

  • The script reads the input data from the Actor instance, which is expected to contain a start_urls key with a list of URLs to scrape and a max_depth key with the maximum depth of nested links to follow.
  • The script enqueues the starting URLs in the default request queue and sets their depth to 0.
  • The script processes the requests in the queue one by one, loading each page in the Selenium-driven headless Chrome browser and locating elements with Selenium's find_elements API.
  • If the depth of the current request is less than the maximum depth, the script looks for nested links in the page and enqueues their targets in the request queue with an incremented depth.
  • The script extracts the desired data from the page (in this case, the URL and title of each page) and pushes them to the default dataset using the Actor.push_data method (see the sketch after this list for reading the stored items back).
  • The script catches any exceptions that occur during the web scraping process and logs an error message using the Actor.log.exception method.
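
Once a run finishes, the items pushed with Actor.push_data sit in the run's default dataset. A minimal sketch of starting a run and reading the results back with the apify-client package (the Actor name and token placeholder are hypothetical):

from apify_client import ApifyClient

# Authenticate with your Apify API token.
client = ApifyClient('MY_APIFY_TOKEN')

# Start the Actor and wait for the run to finish (replace with your own Actor ID).
run = client.actor('username/my-selenium-scraper').call(run_input={
    'start_urls': [{'url': 'https://apify.com'}],
    'max_depth': 1,
})

# Fetch the scraped items from the run's default dataset.
for item in client.dataset(run['defaultDatasetId']).list_items().items:
    print(item['url'], item['title'])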

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.