All-In-One Reddit Scraper | Fast & Reliable | $1.8 / 1K

Pricing: $1.80 / 1,000 results
Developed by: Fatih Tahta (Maintained by Community)
Rating: 5.0 (2) · Runs succeeded: >99% · Last modified: 7 hours ago

All-in-one Reddit Scraper. Scrape posts and full comment threads from any search, subreddit, user, or direct post URL. This enterprise-grade scraper is the fastest on the market and delivers clean, detailed JSON.

# Reddit Scraper Pro

Slug: fatihtahta/reddit-scraper
Price: $1.80 per 1,000 saved items (posts or comments)

The all-in-one Reddit data solution. Go beyond simple search—scrape posts, full comment threads, subreddits, and user pages with a single tool. Whether you provide search queries or a list of direct URLs, this scraper uses enterprise-grade residential proxies to bypass blocks and deliver clean, structured JSON, ready for any analysis.


## Features

  • Scrape Anything on Reddit: Provides two powerful modes:
    • Search Mode: Scrape search results for any query with advanced sorting and time filters.
    • URL Mode: Directly scrape one or more URLs, including subreddits, user pages, or individual posts.
  • Deep Comment Scraping (Optional): A simple switch (scrapeComments) allows you to extract not just the post, but all visible comments and their replies, creating a complete dataset of the entire discussion.
  • Enterprise-Grade Proxies: All requests are routed through a pool of residential proxies, ensuring high success rates and avoiding blocks.
  • Built-in Resiliency: Automatically retries failed requests with intelligent backoff, gracefully handling network errors and timeouts.
  • Clean, Structured JSON: Outputs two distinct item types (post and comment) with clear schemas, perfect for market research, social listening, brand monitoring, or academic analysis.
  • Fast and Efficient: Built with TypeScript and the latest Crawlee framework for high-performance, concurrent scraping.

## How It Works

The scraper operates with a clear priority system:

  1. Check for URLs: If the urls input is provided, it will be used. Search queries will be ignored.
  2. Normalize URLs: Each URL is intelligently identified (as a post, subreddit, user page, or search URL) and converted to the correct API endpoint.
  3. Use Search Queries: If no URLs are provided, the scraper falls back to queries mode, building search requests based on your terms and filters.
  4. Crawl Listings: It crawls the resulting pages (search results, subreddit pages, etc.), following pagination cursors to find post links.
  5. Fetch Posts & Comments: For each post found, it fetches the full data payload.
    • If scrapeComments is true, it recursively processes the entire comment tree, saving each comment as a separate item.
    • If scrapeComments is false, it only saves the post data and ignores comments, saving time and resources.
  6. Push to Dataset: All extracted posts and comments are pushed as clean JSON objects into the dataset.
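Steps 1–3 above can be sketched as a pure URL-classification function. This is a simplified illustration, not the scraper's actual source: the concrete `.json` endpoint paths are assumptions based on Reddit's public JSON API, and the real implementation also handles pagination cursors and edge cases.

```typescript
// Classify a Reddit URL and map it to the JSON endpoint the crawler would fetch.
type RedditTarget = {
  type: "post" | "subreddit" | "user" | "search";
  endpoint: string;
};

function normalizeRedditUrl(raw: string): RedditTarget {
  const url = new URL(raw);
  const path = url.pathname.replace(/\/+$/, ""); // trim trailing slashes

  // Post URLs contain a /comments/<id> segment; check first, since they
  // also start with /r/<subreddit>.
  if (/\/comments\/[a-z0-9]+/i.test(path)) {
    return { type: "post", endpoint: `https://www.reddit.com${path}.json` };
  }
  if (path === "/search") {
    return { type: "search", endpoint: `https://www.reddit.com/search.json${url.search}` };
  }
  if (/^\/(u|user)\//.test(path)) {
    return { type: "user", endpoint: `https://www.reddit.com${path}.json` };
  }
  if (/^\/r\/[^/]+$/.test(path)) {
    return { type: "subreddit", endpoint: `https://www.reddit.com${path}/new.json` };
  }
  throw new Error(`Unrecognized Reddit URL: ${raw}`);
}
```

Because post URLs are matched before subreddit URLs, a full permalink such as `/r/technology/comments/1d95j4g/...` is correctly treated as a single post rather than a subreddit listing.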

## Input

The scraper accepts the following inputs. If urls is provided, queries will be ignored.

  • queries (array of strings): A list of search terms to look up on Reddit.
  • urls (array of strings): A list of specific Reddit URLs to scrape. This has priority over queries.
  • scrapeComments (boolean, default: false): If true, the scraper will extract all comments from post pages.
  • sort (string, default: new): Sort order for search results (new, top, hot, relevance, comments).
  • timeframe (string, default: day): Time range for search results (hour, day, week, month, year, all).
  • maxPosts (integer, default: 10000): A hard limit on the number of posts to save before stopping. Does not include comments.
  • maxConcurrency (integer, default: 120): Number of parallel requests. The built-in autoscaler will manage this for you.
  • includeNsfw (boolean, default: false): If true, includes NSFW (over 18) results.
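In queries mode, a search request built from the inputs above might look like the sketch below. The query parameter names (`q`, `sort`, `t`, `include_over_18`) follow Reddit's public search API and are assumptions about the internals; the defaults mirror the input table.

```typescript
// Build a Reddit JSON search URL from the actor's search-related inputs.
interface SearchInput {
  sort?: "new" | "top" | "hot" | "relevance" | "comments";
  timeframe?: "hour" | "day" | "week" | "month" | "year" | "all";
  includeNsfw?: boolean;
}

function buildSearchUrl(query: string, input: SearchInput = {}): string {
  // Defaults match the input schema: sort=new, timeframe=day, SFW only.
  const { sort = "new", timeframe = "day", includeNsfw = false } = input;
  const params = new URLSearchParams({
    q: query,
    sort,
    t: timeframe,
    include_over_18: includeNsfw ? "on" : "off",
  });
  return `https://www.reddit.com/search.json?${params.toString()}`;
}
```

For example, `buildSearchUrl("ai agents", { sort: "top", timeframe: "week" })` yields a URL ending in `q=ai+agents&sort=top&t=week&include_over_18=off`.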

## Example Input

This example will scrape a specific subreddit and a specific post, extracting all comments from both.

```json
{
  "urls": [
    "https://www.reddit.com/r/socialmedia/",
    "https://www.reddit.com/r/technology/comments/1d95j4g/the_state_of_ai_in_2025_a_comprehensive_report/"
  ],
  "scrapeComments": true,
  "maxPosts": 50
}
```
## Output

The dataset will contain two types of items, distinguished by the `kind` field.

For `kind: "post"`:

```json
{
  "kind": "post",
  "query": "https://www.reddit.com/r/technology/...",
  "id": "1d95j4g",
  "title": "The State of AI in 2025: A Comprehensive Report",
  "body": "This report covers the latest advancements...",
  "author": "tech_analyst",
  "score": 2451,
  "upvote_ratio": 0.95,
  "num_comments": 873,
  "subreddit": "technology",
  "created_utc": "2025-08-05T18:00:00.000Z",
  "url": "https://www.reddit.com/r/technology/comments/1d95j4g/the_state_of_ai_in_2025_a_comprehensive_report/"
}
```
For `kind: "comment"`:

```json
{
  "kind": "comment",
  "query": "https://www.reddit.com/r/technology/...",
  "id": "k5z1x2y",
  "postId": "t3_1d95j4g",
  "parentId": "t3_1d95j4g",
  "body": "Great analysis, but I think you're underestimating the impact of quantum computing on these timelines.",
  "author": "future_thinker",
  "score": 142,
  "created_utc": "2025-08-05T19:15:22.000Z",
  "url": "https://www.reddit.com/r/technology/comments/1d95j4g/the_state_of_ai_in_2025_a_comprehensive_report/k5z1x2y/"
}
```
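The recursive comment traversal described in step 5 of How It Works can be sketched as a tree flattener. The `RawComment` shape here is a simplified, hypothetical stand-in for Reddit's nested `t1` listing objects; the output fields match the comment item schema above.

```typescript
// Simplified stand-in for a node in Reddit's nested comment tree.
interface RawComment {
  id: string;
  parent_id: string;
  body: string;
  author: string;
  score: number;
  replies?: RawComment[];
}

// Flat dataset item, matching the `kind: "comment"` schema (abridged).
interface CommentItem {
  kind: "comment";
  id: string;
  postId: string;
  parentId: string;
  body: string;
  author: string;
  score: number;
}

function flattenComments(postId: string, comments: RawComment[]): CommentItem[] {
  const items: CommentItem[] = [];
  for (const c of comments) {
    items.push({
      kind: "comment",
      id: c.id,
      postId,
      parentId: c.parent_id,
      body: c.body,
      author: c.author,
      score: c.score,
    });
    if (c.replies?.length) {
      // Recurse so every reply, at any depth, becomes its own dataset item.
      items.push(...flattenComments(postId, c.replies));
    }
  }
  return items;
}
```

Top-level comments carry the post's `t3_` id as `parentId`, while nested replies carry their parent comment's `t1_` id, so the original thread structure can be rebuilt from the flat items.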
## Pricing
$1.80 per 1,000 saved items (posts or comments).
All infrastructure and residential proxy costs are bundled in. You only pay for successful results.
## Changelog
2025-08-06 — v2.0.0: Major upgrade. Added URL scraping (posts, subreddits, users), optional comment scraping, and enhanced resiliency.
2025-08-26 — v1.0.0: Initial public release.
## Support

Questions or custom needs? Open an issue on Apify and I'll respond around the clock.

Happy Scraping!
Fatih