
Tweet Scraper|$0.25/1K Tweets | Pay-Per Result | No Rate Limits
Pricing
$0.25 / 1,000 tweets

Only $0.25/1,000 tweets for Twitter scraping, 100% reliability, swift data retrieval. This incredibly low price is almost too good to be true. Thanks to our large-scale operations and efficient servers, we can offer you rock-bottom prices that no competitors can match. Don't miss this opportunity!
2.3 (24)
194
Total users: 3.9K
Monthly users: 1K
Runs succeeded: >99%
Issues response: 4.5 hours
Last modified: 16 days ago
Crawling just stops along the line
Closed
The scraper just stops when trying to crawl tweets. After going through the logs, I saw the error log below. I don't know exactly what the problem is or what the solution would be.
Error log:
2025-07-27T12:30:59.191Z {"errors":[{"message":"Dependency: Unspecified","locations":[{"line":3900,"column":7}],"path":["search_by_raw_query","search_timeline","timeline"],"extensions":{"name":"DependencyError","source":"Server","retry_after":0,"code":0,"kind":"Operational","tracing":{"trace_id":"abf3dbcc94e8f5c5"}},"code":0,"kind":"Operational","name":"DependencyError","source":"Server","retry_after":0,"tracing":{"trace_id":"abf3dbcc94e8f5c5"}}],"data":{"search_by_raw_query":{"id":"U2VhcmNoUXVlcnk6bGFuZzplbiBmcm9tOkZPWDQgc2luY2U6MjAyMi0xMS0wMV8wMDowMDowMF9VVEMgdW50aWw6MjAyNS0wNi0zMF8wMDowMDowMF9VVEM=","rest_id":"lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2025-06-30_00:00:00_UTC","search_timeline":{"id":"VGltZWxpbmU6DAB+CwABAAAATWxhbmc6ZW4gZnJvbTpGT1g0IHNpbmNlOjIwMjItMTEtMDFfMDA6MDA6MDBfVVRDIHVudGlsOjIwMjUtMDYtMzBfMDA6MDA6MDBfVVRDCAACAAAAAQAA"}}}}
2025-07-27T12:30:59.193Z has_TimelineAddEntries_in_instructions or no cursor,will break.query is lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2025-06-30_00:00:00_UTC,cursor is None
2025-07-27T12:30:59.195Z mock_tweets's len is 0
2025-07-27T12:31:02.193Z log fail error
2025-07-27T12:31:02.196Z [apify] INFO Exiting Actor ({"exit_code": 0})

Hello, thanks for the feedback. We will have a look.
eben
Alright. I hope the fix doesn't take too long. Thanks

Based on our observation, there was an abnormal network fluctuation. You can retry your task. Your credits should be fine, because this kind of issue occurs very rarely among our other clients.
eben
Oh, I see. I'm doing a long-running scrape. Will setting "No timeout" keep the actor from stopping abruptly when an abnormal network fluctuation like this happens?
eben
I still got a similar error after this recent retry:
2025-07-28T07:28:43.616Z {"errors":[{"message":"Dependency: Unspecified","locations":[{"line":3900,"column":7}],"path":["search_by_raw_query","search_timeline","timeline"],"extensions":{"name":"DependencyError","source":"Server","retry_after":0,"code":0,"kind":"Operational","tracing":{"trace_id":"60c369a9fe24e388"}},"code":0,"kind":"Operational","name":"DependencyError","source":"Server","retry_after":0,"tracing":{"trace_id":"60c369a9fe24e388"}}],"data":{"search_by_raw_query":{"id":"U2VhcmNoUXVlcnk6bGFuZzplbiBmcm9tOkZPWDQgc2luY2U6MjAyMi0xMS0wMV8wMDowMDowMF9VVEMgdW50aWw6MjAyNC0wMy0zMF8wMDowMDowMF9VVEM=","rest_id":"lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2024-03-30_00:00:00_UTC","search_timeline":{"id":"VGltZWxpbmU6DAB+CwABAAAATWxhbmc6ZW4gZnJvbTpGT1g0IHNpbmNlOjIwMjItMTEtMDFfMDA6MDA6MDBfVVRDIHVudGlsOjIwMjQtMDMtMzBfMDA6MDA6MDBfVVRDCAACAAAAAQAA"}}}}
2025-07-28T07:28:43.618Z has_TimelineAddEntries_in_instructions or no cursor,will break.query is lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2024-03-30_00:00:00_UTC,cursor is None
2025-07-28T07:28:43.620Z mock_tweets's len is 0
2025-07-28T07:28:46.620Z log fail error
2025-07-28T07:28:46.622Z [apify] INFO Exiting Actor ({"exit_code": 0})
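In both failed runs, the transient failure shows up in the log as a GraphQL error payload with `"name":"DependencyError"`. If you download the run log, you could detect this case and retry automatically. A minimal sketch, assuming each log line is `<timestamp> <message>` as in the excerpts above (the helper name is mine, not part of the actor):

```python
import json

def is_dependency_error(log_line: str) -> bool:
    """Return True if a log line carries a DependencyError JSON payload.

    Assumes the line has the shape '<ISO timestamp> <json body>'.
    Non-JSON lines are simply reported as False.
    """
    try:
        _, payload = log_line.split(" ", 1)
        body = json.loads(payload)
    except (ValueError, json.JSONDecodeError):
        return False
    return any(
        err.get("extensions", {}).get("name") == "DependencyError"
        for err in body.get("errors", [])
    )
```

A wrapper script could poll finished runs, check their logs with this predicate, and re-queue the affected query once before giving up.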

Please try using the Search Terms field: split the long time period into multiple time segments, ensuring that each segment contains fewer than 800 tweets.
Search Terms demo:
[
  "lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2022-11-30_00:00:00_UTC",
  "lang:en from:FOX4 since:2022-12-01_00:00:00_UTC until:2022-12-30_00:00:00_UTC",
  .....
]
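The month-by-month segments can be generated rather than typed out by hand. A short sketch in Python; the helper name is my own, and it uses the first day of the following month as each segment's end (the demo above ends segments on day 30 instead, either convention should work as long as segments stay under the 800-tweet limit):

```python
from datetime import date, timedelta

def monthly_segments(handle: str, start: date, end: date, lang: str = "en") -> list[str]:
    """Build one search term per calendar month between start and end."""
    terms = []
    cur = start
    while cur < end:
        # Jump to the first day of the next month, then clamp to the overall end.
        nxt = (cur.replace(day=1) + timedelta(days=32)).replace(day=1)
        seg_end = min(nxt, end)
        terms.append(
            f"lang:{lang} from:{handle} "
            f"since:{cur:%Y-%m-%d}_00:00:00_UTC "
            f"until:{seg_end:%Y-%m-%d}_00:00:00_UTC"
        )
        cur = seg_end
    return terms

terms = monthly_segments("FOX4", date(2022, 11, 1), date(2025, 6, 30))
```

The resulting list can be pasted into the Search Terms field directly or passed as the `searchTerms` input via the API.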
eben
Okay. Thanks.
This makes sense. I was previously just interacting with the actor on the web and collecting the JSON results.
I'll probably need to use the API for this method. I tried it, and it works.
I have a question, though:
Is it fine to append all the search terms in a single run without splitting them into different runs? For example, if I want to cover tweets from 2022-11-01_00:00:00_UTC to 2025-01-01_00:00:00_UTC, can I just put all of the time segments in a single run like this: [ "lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2022-11-30_00:00:00_UTC", "lang:en from:FOX4 since:2022-12-01_00:00:00_UTC until:2022-12-30_00:00:00_UTC", ..., ..., "lang:en from:FOX4 since:2025-06-18_00:00:00_UTC until:2025-06-30_00:00:00_UTC" ]?
Or do I need to split these and fire off a separate run for each?
Thanks

It only needs to be run once because you've set the searchTerms field.
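For reference, a single run carrying every segment in `searchTerms` might look like this with the Apify Python client (`pip install apify-client`). The token and actor ID are placeholders, and any input fields beyond `searchTerms` depend on this actor's input schema:

```python
run_input = {
    "searchTerms": [
        "lang:en from:FOX4 since:2022-11-01_00:00:00_UTC until:2022-12-01_00:00:00_UTC",
        "lang:en from:FOX4 since:2022-12-01_00:00:00_UTC until:2023-01-01_00:00:00_UTC",
        # ... one entry per month, through 2025-06-30
    ],
}

def run_once(token: str, actor_id: str) -> list:
    """Start one actor run covering all segments and collect its dataset items."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Because all segments travel in one `searchTerms` array, one `call()` covers the whole period; the same input JSON can also be pasted into the actor's input form on the web.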
eben
Okay. I discovered I can still do everything on the web. Makes sense.
Thanks