Bloxy Tokens Info



Developed by Zhen. Maintained by the community.

Gets the name and contract address of every token listed on Bloxy.

Rating: 0.0 (0)
Pricing: Pay per usage
Monthly users: 1
Runs succeeded: >99%
Last modified: 3 years ago

Dockerfile

# Dockerfile template used to run Actors on the Apify platform.
# The base image name below is set during the Actor build, based on user settings.
# IMPORTANT: The base image must set a correct working directory, such as /usr/src/app or /home/user
FROM apify/actor-node-puppeteer-chrome

# Copy just package.json and package-lock.json first, since they are the only
# files that affect "npm install" in the next step; this speeds up rebuilds.
COPY package*.json ./

# Install NPM packages, skipping optional and development dependencies to
# keep the image small. Avoid logging too much and print the dependency
# tree for debugging.
RUN npm --quiet set progress=false \
 && npm install --only=prod --no-optional \
 && echo "Installed NPM packages:" \
 && (npm list --all || true) \
 && echo "Node.js version:" \
 && node --version \
 && echo "NPM version:" \
 && npm --version

# Copy the source code last, so rebuilds are fast when only the source changed.
COPY --chown=myuser:myuser . ./

# NOTE: The CMD is already defined by the base image.
# Uncomment this for local Node.js inspector debugging:
# CMD [ "node", "--inspect=0.0.0.0:9229", "main.js" ]

package.json

{
    "name": "apify-project",
    "version": "0.0.1",
    "description": "",
    "author": "It's not you it's me",
    "license": "ISC",
    "dependencies": {
        "apify": "latest"
    },
    "scripts": {
        "start": "node main.js"
    }
}
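Note that `"apify": "latest"` pulls whatever the newest SDK version is at build time, while the code in main.js uses the v1/v2-era API (`Apify.main`, `Apify.PuppeteerCrawler`, `handlePageFunction`), which later SDK versions removed. A pinned dependency range would keep the build reproducible; the exact version shown below is an assumption (the last release of the 2.x line), not something stated by the author:

```json
{
    "dependencies": {
        "apify": "^2.3.2"
    }
}
```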

main.js

const Apify = require('apify');

Apify.main(async () => {
    // Apify.openRequestQueue() creates a preconfigured RequestQueue instance.
    // We add our first request to it - the initial page the crawler will visit.
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://bloxy.info/list_tokens/ERC20?page=1' });

    // The input may be null when the Actor is run without any, so default to {}.
    const input = (await Apify.getInput()) || {};

    // Create an instance of the PuppeteerCrawler class - a crawler
    // that automatically loads the URLs in headless Chrome / Puppeteer.
    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        launchContext: {
            // Here you can set options that are passed to Puppeteer's launch() function.
            launchOptions: {
                headless: true,
            },
        },

        // Stop crawling after several pages
        maxRequestsPerCrawl: input.maxPage || 50,
        handlePageTimeoutSecs: 30,

        handlePageFunction: async ({ request, page }) => {
            console.log(`Processing ${request.url}...`);

            // A function to be evaluated by Puppeteer within the browser context.
            const data = await page.$$eval('tbody tr', ($rows) => {
                const scrapedData = [];
                // Extract the token name, contract address and transfer
                // statistics from each row of the token list table.
                $rows.forEach(($row) => {
                    const linkContract = $row.querySelector('td:nth-child(1) a').href;
                    const contract = linkContract.split('token_holders/')[1];
                    scrapedData.push({
                        title: $row.querySelector('td:nth-child(1)').textContent.trim(),
                        contract,
                        type: $row.querySelector('td:nth-child(2)').textContent.trim(),
                        senders_receivers: $row.querySelector('td:nth-child(3)').textContent.trim(),
                        transfers7days: $row.querySelector('td:nth-child(4)').textContent.trim(),
                        volume7days: $row.querySelector('td:nth-child(5)').textContent.trim(),
                        topTransferTx: $row.querySelector('td:nth-child(6)').textContent.trim(),
                    });
                });

                return scrapedData;
            });
            console.log(data);
            // Store the results to the default dataset.
            await Apify.pushData(data);

            // Find a link to the next page and enqueue it if it exists.
            const infos = await Apify.utils.enqueueLinks({
                page,
                requestQueue,
                selector: '.next_page a',
            });

            if (infos.length === 0) console.log(`${request.url} is the last page!`);
        },

        // This function is called if the page processing failed more than maxRequestRetries+1 times.
        handleFailedRequestFunction: async ({ request }) => {
            console.log(`Request ${request.url} failed too many times.`);
        },
    });

    // Run the crawler and wait for it to finish.
    await crawler.run();

    console.log('Crawler finished.');
});
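The contract address is derived purely from the token link's URL: assuming Bloxy links each token to a `/token_holders/<address>` page, that string-splitting step can be exercised in isolation. The `extractContract` helper below is hypothetical (not part of the Actor), and adds a null guard the in-page code lacks:

```javascript
// Standalone version of the extraction done inside page.$$eval():
// the contract address is the path segment after "token_holders/".
function extractContract(linkHref) {
    const parts = linkHref.split('token_holders/');
    // If the link does not point at a token_holders page, return null
    // instead of producing undefined.
    return parts.length > 1 ? parts[1] : null;
}

console.log(extractContract('https://bloxy.info/token_holders/0xdac17f958d2ee523a2206206994597c13d831ec7'));
// -> 0xdac17f958d2ee523a2206206994597c13d831ec7
console.log(extractContract('https://bloxy.info/list_tokens/ERC20?page=2'));
// -> null
```

Rows whose first cell has no link would still throw inside `page.$$eval` (the `querySelector(...).href` access), so a guard like this would belong in the in-page code as well if Bloxy's table layout ever changes.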

Pricing

Pricing model

Pay per usage

This Actor is paid per platform usage: the Actor itself is free, and you pay only for the Apify platform resources it consumes.