Scrapy

This example Scrapy spider scrapes page titles from URLs defined in the input parameter. It shows how to use the Apify SDK for Python and Scrapy pipelines to save results.

Language

python

Tools

scrapy

Use cases

Web scraping

Files

  • src/main.py (shown below)
  • src/spiders/title.py
  • src/__main__.py
  • src/items.py
  • src/pipelines.py
  • src/settings.py

1"""This module defines the main entry point for the Apify Actor.
2
3This module defines the main coroutine for the Apify Scrapy Actor, executed from the __main__.py file. The coroutine
4processes the Actor's input and executes the Scrapy spider. Additionally, it updates Scrapy project settings by
5applying Apify-related settings. Which includes adding a custom scheduler, retry middleware, and an item pipeline
6for pushing data to the Apify dataset.
7
8Customization:
9--------------
10
11Feel free to customize this file to add specific functionality to the Actor, such as incorporating your own Scrapy
12components like spiders and handling Actor input. However, make sure you have a clear understanding of your
13modifications. For instance, removing `apply_apify_settings` break the integration between Scrapy and Apify.
14
15Documentation:
16--------------
17
18For an in-depth description of the Apify-Scrapy integration process, our Scrapy components, known limitations and
19other stuff, please refer to the following documentation page: https://docs.apify.com/cli/docs/integrating-scrapy.
20"""
21
22from __future__ import annotations
23
24from scrapy.crawler import CrawlerProcess
25
26from apify import Actor
27from apify.scrapy.utils import apply_apify_settings
28
29# Import your Scrapy spider here
30from .spiders.title import TitleSpider as Spider
31
32# Default input values for local execution using `apify run`
33LOCAL_DEFAULT_START_URLS = [{'url': 'https://apify.com'}]
34
35
36async def main() -> None:
37    """
38    Apify Actor main coroutine for executing the Scrapy spider.
39    """
40    async with Actor:
41        Actor.log.info('Actor is being executed...')
42
43        # Process Actor input
44        actor_input = await Actor.get_input() or {}
45        start_urls = actor_input.get('startUrls', LOCAL_DEFAULT_START_URLS)
46        proxy_config = actor_input.get('proxyConfiguration')
47
48        # Open the default request queue for handling URLs to be processed.
49        request_queue = await Actor.open_request_queue()
50
51        # Enqueue the start URLs.
52        for start_url in start_urls:
53            url = start_url.get('url')
54            await request_queue.add_request(url)
55
56        # Apply Apify settings, it will override the Scrapy project settings
57        settings = apply_apify_settings(proxy_config=proxy_config)
58
59        # Execute the spider using Scrapy CrawlerProcess
60        process = CrawlerProcess(settings, install_root_handler=False)
61        process.crawl(Spider)
62        process.start()

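The spider imported in src/main.py lives in src/spiders/title.py. For orientation, a minimal spider of this kind can look like the sketch below; the item fields and the CSS selector are illustrative assumptions, not a verbatim copy of the template's file.

from __future__ import annotations

from scrapy import Spider
from scrapy.http import Response

from ..items import TitleItem  # assumed to define `url` and `title` fields


class TitleSpider(Spider):
    """Sketch of a spider that extracts the <title> of each page."""

    name = 'title_spider'
    # No static start_urls here: requests are fed in at runtime from the
    # Apify request queue via the custom scheduler applied in main.py.

    def parse(self, response: Response):
        # Yield one item per page with its URL and <title> text.
        yield TitleItem(
            url=response.url,
            title=response.css('title::text').get(),
        )
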
Python Scrapy template

A template example built with Scrapy to scrape page titles from URLs defined in the input parameter. It shows how to use the Apify SDK for Python and Scrapy pipelines to save results.

Included features

  • Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
  • Input schema - define and easily validate a schema for your Actor's input
  • Request queue - a queue into which you can put the URLs you want to scrape
  • Dataset - a storage for structured data where each stored object has the same attributes
  • Scrapy - a fast, high-level web scraping framework
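
For context, here is a hypothetical example of the input that src/main.py reads via Actor.get_input(); the two keys match the code above, while the exact shape of the proxy settings is an assumption validated in practice by the template's input schema.

# Hypothetical Actor input, matching the keys read in src/main.py.
example_input = {
    'startUrls': [
        {'url': 'https://apify.com'},
        {'url': 'https://crawlee.dev'},
    ],
    # The proxy settings shape is an assumption for illustration.
    'proxyConfiguration': {'useApifyProxy': True},
}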

How it works

This code is a Python script that uses Scrapy to scrape web pages and extract data from them. Here's a brief overview of how it works:

  • The script reads the input data from the Actor instance, which is expected to contain a startUrls key with a list of URLs to scrape.
  • The script then runs a Scrapy spider over those URLs. This spider (the TitleSpider class) collects the URLs and page titles.
  • A Scrapy item pipeline saves the results to the default dataset associated with the Actor run, using the push_data method of the Actor instance (a sketch of such a pipeline follows this list).
  • The script catches any exceptions that occur during the web scraping process and logs an error message using the Actor.log.exception method.
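
Below is a minimal sketch of such an item pipeline, assuming the asyncio-based coroutine support that apply_apify_settings enables; the class name is illustrative, and the template ships its own pipeline in src/pipelines.py.

from __future__ import annotations

from itemadapter import ItemAdapter

from apify import Actor


class ApifyDatasetPushPipeline:
    """Sketch of an item pipeline that pushes scraped items to the dataset."""

    async def process_item(self, item, spider):
        # Convert the Scrapy item to a plain dict and store it in the
        # default dataset of the current Actor run.
        await Actor.push_data(ItemAdapter(item).asdict())
        return item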

Resources

  • For an in-depth description of the Apify-Scrapy integration, see https://docs.apify.com/cli/docs/integrating-scrapy

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.