Reddit Subreddit Members Scraper
Pay $9.00 for 1,000 results
Scrape all members of a subreddit. Find the most active and influential users within Reddit communities. Perfect for market research, community analysis, and finding key players in your target niche.
Discover active users and influencers in your target Reddit communities. This scraper/API returns a list of users within a given subreddit, including how many times each user has posted and commented.
✨ Features
- Comprehensive User Data: Collect detailed metrics including:
  - Usernames
  - Total post count per user
  - Total comment count per user
  - User profile URLs
  - Unique Reddit IDs
- Configurable Depth: Control how many posts & comments to analyze per subreddit
- Clean JSON Output: Results in easily parseable JSON format
💡 Use Cases
- Market Research: Identify active users and influencers in your target communities
- Community Analysis: Understand user engagement patterns and activity levels
- Influencer Discovery: Find power users and active contributors in specific niches
- Content Strategy: Analyze successful content creators in your field
- Academic Research: Gather data for social media studies and user behavior analysis
📊 Output Format
- username: The username of the user
- userId: The Reddit ID of the user
- userUrl: The URL of the user's profile
- postCount: The number of posts the user has made (max 1,000)
- commentCount: The number of comments the user has made (max 1,000)
- firstActivity: The timestamp of the user's first activity in the subreddit (epoch Unix UTC timestamp, i.e. seconds since 1970-01-01)
- lastActivity: The timestamp of the user's last activity in the subreddit (epoch Unix UTC timestamp, i.e. seconds since 1970-01-01)
```json
{
  "username": "username",
  "userId": "t2_12345abcd",
  "userUrl": "https://reddit.com/user/username",
  "postCount": 5,
  "commentCount": 17,
  "firstActivity": 1715769600,
  "lastActivity": 1715769600
}
```
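As a usage sketch, here is one way the output could be ranked to surface the most active members. It assumes the dataset was exported to a local results.json file (the filename is an assumption, not part of the actor) and that the run used fetchDetails: true so postCount and commentCount are populated:

```python
import json

# Load the exported dataset (filename is an assumption; adjust to your export).
with open("results.json") as f:
    users = json.load(f)

# Rank members by combined post and comment activity within the subreddit.
ranked = sorted(users, key=lambda u: u["postCount"] + u["commentCount"], reverse=True)

for user in ranked[:10]:
    print(f"{user['username']}: {user['postCount']} posts, {user['commentCount']} comments")
```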
🛠️ Configuration Options
| Parameter | Type | Description | Default |
|---|---|---|---|
| subreddit | String | The subreddit to scrape | |
| maxPosts | Number | (optional) Number of posts to analyze per subreddit | 99999999 |
| maxComments | Number | (optional) Number of comments to analyze per subreddit | 99999999 |
| fetchDetails | Boolean | (optional) Whether to fetch additional details for each user | false |
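For example, a sketched input using the parameters above (the subreddit name and limits are illustrative, not defaults):

```python
run_input = {
    "subreddit": "learnpython",  # placeholder target subreddit
    "maxPosts": 2000,            # analyze at most 2,000 recent posts
    "maxComments": 2000,         # analyze at most 2,000 recent comments
    "fetchDetails": False,       # skip per-user detail requests for speed
}
```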
🚦 Getting Started
- Enter your target subreddit in the input configuration
- Set an optional limit for the number of posts & comments to analyze per subreddit
- Run the actor and get your data in clean JSON format
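If you prefer to run the actor programmatically, here is a minimal sketch using Apify's Python client. The API token, actor ID, and subreddit are placeholders, since the listing does not specify them:

```python
from apify_client import ApifyClient

# Placeholder token and actor ID; substitute your own values.
client = ApifyClient("<YOUR_APIFY_API_TOKEN>")

run_input = {
    "subreddit": "learnpython",  # placeholder target subreddit
    "fetchDetails": False,       # leave False for faster runs (see Workings below)
}

# Start the actor and wait for the run to finish.
run = client.actor("<USERNAME>/reddit-subreddit-members-scraper").call(run_input=run_input)

# Iterate over the scraped members stored in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["username"], item["userUrl"])
```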
📈 Benefits
- Save hours of manual data collection
- Get structured, ready-to-analyze data
- Identify key community members effortlessly
- Make data-driven decisions for your Reddit strategy
⚙️ Workings
The most recent posts and comments of the subreddit are scraped. To scrape only posts or only comments, set maxComments or maxPosts to 0, respectively. Only unique users are collected. If fetchDetails is set to true, the actor also fetches additional details for each user: number of posts, number of comments, and first and last activity within the subreddit. This requires two extra scrape requests per user and slows the run down significantly, so keep fetchDetails set to false unless you really need those details.
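For instance, a comments-only run that also collects per-user details might look like this sketch (the subreddit is a placeholder):

```python
run_input = {
    "subreddit": "learnpython",  # placeholder target subreddit
    "maxPosts": 0,               # skip posts entirely; analyze comments only
    "fetchDetails": True,        # adds postCount, commentCount, firstActivity, lastActivity
}
```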
Tip: Set the posts and comments limits high. A higher limit yields a larger list of users and more accurate results. You are billed only for unique users, so scraping the same user multiple times incurs no additional cost.
🔒 Privacy & Compliance
This scraper only collects publicly available data.
💪 Why Choose This Scraper?
- Reliable: Built on Apify's robust infrastructure
- Scalable: Handle multiple subreddits efficiently
- Up-to-date: Maintained and updated regularly
- Support: Backed by Apify's excellent customer service
Ready to unlock valuable Reddit user insights? Start scraping now! 🚀