Lobsters Scraper
3-day trial, then $10.00/month - No credit card required now
Scrape Lobste.rs posts and users based on any search criteria. Retrieve all the comments, domains, tags, titles, numbers of upvotes, and published dates. Use this extremely fast actor to retrieve all the information right away. Easy to use and no limits!
Actor - Lobsters Scraper
Lobsters scraper
Since Lobste.rs doesn't provide a free, full-featured API, this actor helps you retrieve data from it.
The Lobsters data scraper supports the following features:
- Search any keyword - Search for any keyword you like and get the results.
- Scrape domains - Get all the posts from each of the domains represented on lobste.rs.
- Get posts by tags - Scrape the results for any specific tag.
- Retrieve user details - If you are looking for specific user details, you are in the right place.
- Fetch comments of any post - All the comments shared under a post are included in the results.
- Get active and recent posts - Don't get outdated! Active and recent posts can be harvested right away from Lobsters.
Bugs, fixes, updates, and changelog
This scraper is under active development. If you have any feature requests, you can create an issue here.
Upcoming Features
- Integrate hierarchical comment tree structure.
Input Parameters
The input of this scraper should be JSON containing the list of pages on Lobsters that should be visited. Possible fields are:
- search: (Optional) (String) Keyword that you want to search on Lobsters.
- startUrls: (Optional) (Array) List of Lobsters URLs. You should only provide domain, tag, user detail, post detail, active posts, recent posts, or search URLs.
- endPage: (Optional) (Number) Final page number that you want to scrape. The default is Infinite. This applies to all search requests and startUrls individually.
- maxItems: (Optional) (Number) Limit the number of scraped items. This is useful when you search through big lists or large search results.
- proxy: (Required) (Proxy Object) Proxy configuration.
- extendOutputFunction: (Optional) (String) A function that takes a jQuery handle ($) as an argument and returns an object with data.
- customMapFunction: (Optional) (String) A function that takes each item as an argument and returns the item after the function has been applied to it.
This solution requires the use of proxy servers: either your own proxy servers or Apify Proxy.
Tip
When you want to scrape a specific list URL, just copy and paste the link as one of the startUrls.
If you would like to scrape only the first page of a list, provide the link for that page and set endPage to 1.
With the same approach you can also fetch any interval of pages. If you provide the 5th page of a list and set the endPage parameter to 6, you'll get only the 5th and 6th pages.
Compute Unit Consumption
The actor is optimized to run blazing fast and scrape as many items as possible. Therefore, it front-loads all the detail requests. If the actor isn't blocked too often, it will scrape 100 listings in around 30 seconds using ~0.01-0.02 compute units.
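Based on the figures above, you can roughly extrapolate the cost of a larger run. The sketch below assumes the quoted ~0.01-0.02 compute units per 100 listings holds at scale, which is an estimate rather than a guarantee:

```python
def estimate_compute_units(num_items, cu_per_100_items=0.015):
    """Rough compute-unit estimate, assuming the ~0.01-0.02 CU per
    100 listings rate quoted above (midpoint 0.015 by default)."""
    return num_items / 100 * cu_per_100_items

# A 10,000-item run would land somewhere around 1-2 compute units:
print(estimate_compute_units(10_000))        # midpoint estimate: 1.5
print(estimate_compute_units(10_000, 0.02))  # pessimistic estimate: 2.0
```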
Lobsters Scraper Input example
```json
{
  "startUrls": [
    "https://lobste.rs/domains/google.com",
    "https://lobste.rs/t/devops",
    "https://lobste.rs/u/lambda",
    "https://lobste.rs/active",
    "https://lobste.rs/recent",
    "https://lobste.rs/search?q=google&what=stories&order=newest"
  ],
  "maxItems": 10,
  "endPage": 2,
  "proxy": {
    "useApifyProxy": true
  }
}
```
During the Run
During the run, the actor will output messages letting you know what is going on. Each message always contains a short label specifying which page from the provided list is currently being processed. When items are loaded from a page, you should see a message about this event with the loaded item count and total item count for that page.
If you provide incorrect input to the actor, it will immediately stop with a failure state and output an explanation of what is wrong.
Lobsters Export
During the run, the actor stores results into a dataset. Each item is a separate item in the dataset.
You can manage the results in any language (Python, PHP, Node JS/NPM). See the FAQ or our API reference to learn more about getting results from this Lobsters actor.
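As a minimal sketch of fetching the results programmatically, the snippet below calls Apify's standard dataset-items endpoint. The dataset ID and API token are placeholders you would take from your own run:

```python
import json
import urllib.request

def dataset_items_url(dataset_id, token, fmt="json"):
    """Build the Apify dataset-items URL for a run's dataset."""
    return (f"https://api.apify.com/v2/datasets/{dataset_id}/items"
            f"?format={fmt}&token={token}")

def fetch_items(dataset_id, token):
    """Download all items from the dataset as a list of dicts."""
    with urllib.request.urlopen(dataset_items_url(dataset_id, token)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (placeholders, not real credentials):
# items = fetch_items("YOUR_DATASET_ID", "YOUR_APIFY_TOKEN")
# posts = [item for item in items if item.get("type") == "post"]
```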
Scraped Lobsters Properties
The structure of each item in Lobsters looks like this:
User Detail
```json
{
  "type": "user",
  "name": "lambda",
  "url": "https://lobste.rs/u/lambda",
  "avatar": "https://lobste.rs/avatars/lambda-100.png",
  "status": "Active user",
  "homepage": "https://maxcountryman.com",
  "github": "https://github.com/maxcountryman",
  "twitter": "https://twitter.com/MaxCountryman",
  "about": "Indie hacker and people-first leader. Building https://remotejobs.com in public on Twitter.",
  "karma": "345, averaging 9.08 per story/comment",
  "numberOfComments": "10",
  "numberOfStories": "26, most commonly tagged compsci"
}
```
Post Detail
```json
{
  "type": "post",
  "id": "kour63",
  "url": "https://lobste.rs/s/kour63/help_test_cargo_s_new_index_protocol",
  "title": "Help test Cargo's new index protocol",
  "link": "https://blog.rust-lang.org/inside-rust/2023/01/30/cargo-sparse-protocol.html",
  "numberOfUpvotes": 13,
  "userName": "icefox",
  "userLink": "https://lobste.rs/u/icefox",
  "domain": "blog.rust-lang.org",
  "date": "2023-03-09 12:24:32 -0600",
  "tags": [
    "devops",
    "rust"
  ],
  "comments": [
    {
      "id": "dudcdn",
      "body": "Rust 1.68.0 has been released so this is now usable in stable Rust too. Still opt-in though. https://blog.rust-lang.org/2023/03/09/Rust-1.68.0.html",
      "numberOfUpvotes": 3,
      "date": "2023-03-09 16:48:32 -0600",
      "userLink": "https://lobste.rs/u/wezm",
      "userName": "wezmlink"
    }
  ]
}
```
Contact
Please visit epctex.com to see all the products available for you. If you are looking for a custom integration, please reach out to us through the chat box on epctex.com. In need of support? devops@epctex.com is at your service.