🏯 Tweet Scraper V2 - X / Twitter Scraper


Developed by API Dojo

Maintained by Community

⚡️ Lightning-fast search, URL, list, and profile scraping, with customizable filters. At $0.40 per 1,000 tweets and 30-80 tweets per second, it is ideal for researchers, entrepreneurs, and businesses! Get comprehensive insights from Twitter (X) now!

Rating: 3.8 (87)

Pricing: from $0.40 / 1,000 tweets

Total users: 19K
Monthly users: 2.6K
Runs succeeded: >99%
Issues response: 4.1 hours
Last modified: 8 hours ago


Crawler Finishes Before Requested Amount of Retweets

Closed

onurtuncaybal opened this issue 14 days ago

We would like to download 70k replies to the user @grok for one day. However, the crawler finishes at seemingly random counts. The following is an example call:

{ "customMapFunction": "(object) => { return {...object} }", "end": "2025-07-08", "inReplyTo": "@grok", "includeSearchTerms": false, "maxItems": 76000, "minimumFavorites": 0, "minimumReplies": 0, "minimumRetweets": 0, "onlyImage": false, "onlyQuote": false, "onlyTwitterBlue": false, "onlyVerifiedUsers": false, "onlyVideo": false, "sort": "Latest", "start": "2025-07-07", "tweetLanguage": "en" }

It returned 566 items, which cannot be the real number of replies, because the same call has previously returned around 3,466 replies.

apidojo replied:

Hello,

When you need advanced filtering, you should use Twitter search queries instead of the predefined filters. For your case, you can try something like:

{
    "searchTerms": [
        "to:grok since:2025-07-07 until:2025-07-08"
    ]
}
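
For illustration, here is a minimal sketch of running that input through the Apify Python client; the actor ID "apidojo/tweet-scraper" and the APIFY_TOKEN environment variable are placeholders, so substitute the actual actor ID and your own token:

import os
from apify_client import ApifyClient

# Placeholder token; replace with your own API token.
client = ApifyClient(os.environ["APIFY_TOKEN"])

run_input = {
    "searchTerms": ["to:grok since:2025-07-07 until:2025-07-08"],
    "sort": "Latest",
    "maxItems": 76000,
}

# Start the actor run and wait for it to finish.
# "apidojo/tweet-scraper" is a placeholder actor ID.
run = client.actor("apidojo/tweet-scraper").call(run_input=run_input)

# Iterate over whatever tweets the run managed to collect.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("url"))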

However, when using a scraper, it's important to keep in mind that it may not fetch all posts/tweets if there are many. Our scrapers visit Twitter, run your query on search, and attempt to retrieve as much data as possible, but Twitter can sometimes end pagination prematurely or hide some tweets (for example, if they are shadow-banned). Because of this, full data retrieval isn't guaranteed; the scraper will fetch whatever is accessible at the time.
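
If a single query stops paginating early, one possible workaround (a rough sketch, not a guarantee of full coverage) is to split the day into smaller windows and send one query per window, for example with the since_time:/until_time: operators that take Unix timestamps; please verify these operators against the query documentation mentioned below before relying on them:

from datetime import datetime, timedelta, timezone

# Split 2025-07-07 into hourly windows so each query stays small enough
# that pagination is less likely to stop before the end of the window.
start = datetime(2025, 7, 7, tzinfo=timezone.utc)
end = datetime(2025, 7, 8, tzinfo=timezone.utc)
window = timedelta(hours=1)

search_terms = []
t = start
while t < end:
    t_next = min(t + window, end)
    # since_time:/until_time: take Unix timestamps (assumed operators; check the docs).
    search_terms.append(
        f"to:grok since_time:{int(t.timestamp())} until_time:{int(t_next.timestamp())}"
    )
    t = t_next

run_input = {"searchTerms": search_terms, "sort": "Latest", "maxItems": 76000}
# Pass run_input to the actor exactly as in the previous example.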

Please remember, this is a scraper, not an official API, so certain limitations are expected. You can also try this yourself via the web interface; I am attaching a screenshot for your reference.

You can get more information on how to use these queries from our documentation:

I hope this helps.

You can also contact us via Discord if you need more help while using these queries.

Cheers