
Website Contact & Socials Extractor
Crawl company websites and extract emails, phone numbers and links to Discord, Facebook, Instagram, LinkedIn, Pinterest, Reddit, Snapchat, Telegram, TikTok, Twitch, Twitter/X and YouTube. 2 hour trial available.
5.0 (1)
Pricing: $10.00/month + usage
Total users: 266
Monthly users: 90
Runs succeeded: 92%
Issues response: 6.4 hours
Last modified: 2 days ago
Large difference in price for two different runs
Open
$3.573 for 79 results
https://console.apify.com/actors/runs/I8fdNSFdcqRHAzFyM#output
$3.066 for 5,012 results
https://console.apify.com/actors/runs/btgBjeKryjuXK0RIk#output
Also, when I add in the full website address, it seems to remove the https:// and in some cases the rest of the URL. Then I cannot match results on export.

Hi spr123, thank you for bringing this up. We will investigate your case.

We noticed that Apify does not provide us with detailed cost information for your runs. The cost difference is unclear, especially since:
- Both runs appear to use the same hardware settings
- Proxy configurations are identical
- The more expensive run was shorter, made fewer requests, and used less storage
Could you please open the run pages and send us screenshots of the "Usage" section? Here's an example showing where to find the cost breakdown.

Regarding the behavior that strips the full website address: this is by design, so that different URLs pointing at the same domain are deduplicated.
However, we understand the importance of linking the input URL to the output domain, and we will look for an appropriate solution for your case.
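To illustrate, here is a minimal sketch of the kind of normalization involved (illustrative only, not the Actor's exact code):

```ts
// Sketch only: reduce any input URL to a bare domain so that different URLs
// pointing at the same site deduplicate to a single crawl target.
function toDomainKey(inputUrl: string): string {
    const { hostname } = new URL(inputUrl);
    // The scheme, "www." prefix, path, and query are all discarded here,
    // which is why "https://..." inputs come back as plain domains.
    return hostname.replace(/^www\./, '');
}

console.log(toDomainKey('https://www.example.com/contact?ref=ad')); // "example.com"
console.log(toDomainKey('http://example.com/'));                    // "example.com"
```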
spr123
Results: 79 · Requests: 331 of 4.5K handled · Usage: $3.573 · Started: 2025-07-29 21:01 · Duration: 20 m 12 s
This is the platform usage of the Actor run. Note that this view shows the usage with your current prices, and doesn't include legacy volume discounts, compute units included in your plan, or timed storage.
Actor compute units: 1.3466 ($0.539)
Dataset reads: 0 ($0.000)
Dataset writes: 8,787 ($0.044)
Key-value store reads: 907 ($0.005)
Key-value store writes: 18,578 ($0.929)
Key-value store lists: 0 ($0.000)
Request queue reads: 13,441 ($0.054)
Request queue writes: 95,075 ($1.902)
Data transfer internal: 1.04 GB ($0.052)
Data transfer external: 0.25 GB ($0.050)
Proxy residential data transfer: 0.00 GB ($0.000)
Proxy SERPs: 0 ($0.000)
spr123
Results: 5,012 · Requests: 6.7K of 6.7K handled · Usage: $3.066 · Started: 2025-07-29 20:27 · Duration: 27 m 22 s
Actor compute units: 1.8252 ($0.730)
Dataset reads: 0 ($0.000)
Dataset writes: 11,675 ($0.058)
Key-value store reads: 7,028 ($0.035)
Key-value store writes: 28,527 ($1.426)
Key-value store lists: 0 ($0.000)
Request queue reads: 7,027 ($0.028)
Request queue writes: 34,602 ($0.692)
Data transfer internal: 0.41 GB ($0.020)
Data transfer external: 0.38 GB ($0.075)
Proxy residential data transfer: 0.00 GB ($0.000)
Proxy SERPs: 0 ($0.000)

Thank you for providing the details; we are investigating your case now.

Here is a side-by-side comparison of the two Actor runs (table attached as a screenshot). Most billing metrics behave as expected: Run #1 has a shorter runtime, fewer results, and lower costs across most categories than Run #2.
The exception is request queue writes: Run #1 shows a 2.75× higher cost there, despite producing fewer results. This metric reflects how many URLs were submitted to the request queue during the crawl.
Our assessment: Run #1 likely included domains with significantly more internal links, causing the Actor to enqueue many more URLs. This inflates queue operations and drives up cost even though relatively few pages were actually processed.
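For reference, the 2.75× figure comes directly from the request queue write rows of the two usage breakdowns above:

```ts
// Request queue writes copied from the usage breakdowns above.
const run1 = { writes: 95_075, cost: 1.902 }; // Run #1 (79 results)
const run2 = { writes: 34_602, cost: 0.692 }; // Run #2 (5,012 results)

console.log((run1.writes / run2.writes).toFixed(2)); // "2.75"
console.log((run1.cost / run2.cost).toFixed(2));     // "2.75"
```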
To achieve a more predictable and uniform cost per run, we recommend trying:
- Limiting the maximum number of pages per domain to 10 ("Maximum pages per domain" on the Input tab)
- Reducing the crawl depth to 3 ("Maximum crawl depth" on the Input tab)
These adjustments should cap the number of discovered links and prevent runaway queue growth on heavily interlinked sites; the sketch below illustrates the idea. Let us know if that improves your runs.
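Here is a rough sketch of the idea, assuming a Crawlee-based crawler (illustrative only; the constants and handler are ours for this example, not the Actor's internal code):

```ts
import { CheerioCrawler } from 'crawlee';

const MAX_PAGES_PER_DOMAIN = 10; // mirrors the "Maximum pages per domain" input
const MAX_CRAWL_DEPTH = 3;       // mirrors the "Maximum crawl depth" input
const pagesPerDomain = new Map<string, number>();

const crawler = new CheerioCrawler({
    async requestHandler({ request, enqueueLinks }) {
        const domain = new URL(request.url).hostname;
        const handledSoFar = (pagesPerDomain.get(domain) ?? 0) + 1;
        pagesPerDomain.set(domain, handledSoFar);
        const depth = (request.userData.depth as number) ?? 0;

        // ... extract emails, phone numbers, and social links here ...

        // Links handed to enqueueLinks() end up as billed request queue writes,
        // so stop expanding once either cap is reached.
        if (handledSoFar >= MAX_PAGES_PER_DOMAIN || depth >= MAX_CRAWL_DEPTH) return;

        await enqueueLinks({
            strategy: 'same-domain',        // stay on the current site
            userData: { depth: depth + 1 }, // track depth per request
        });
    },
});

await crawler.run(['https://www.example.com']);
```

With both caps in place, each domain can contribute only a bounded number of queue writes, regardless of how densely it is interlinked.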
Aside from these changes, you can also try reducing the Actor memory from 4096 MB down to 2048 MB, given that your run did not use more than 1.2 GB at a time. This should lower the cost even further.
Let us know if that helps.

We’ve released an update to the actor based on your request to link the output domains with the original input URLs.
The resulting dataset now includes a new column: start_urls. This column lists the URLs the actor used to start crawling each domain, including the exact URLs you provided in the input.
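For example, here is a small sketch of matching exported rows back to your inputs using that column (the domain field name and the array shape are assumed here for illustration):

```ts
// Sketch only: group exported rows by the input URL they originated from,
// assuming each row carries a "domain" field and the new "start_urls" column.
type ResultRow = { domain: string; start_urls: string[] };

function groupByInputUrl(rows: ResultRow[], inputUrls: string[]): Map<string, ResultRow[]> {
    const grouped = new Map<string, ResultRow[]>();
    for (const input of inputUrls) {
        grouped.set(input, rows.filter((row) => row.start_urls.includes(input)));
    }
    return grouped;
}
```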
Let us know if this covers your needs or if you’d like further adjustments.
spr123
Hi
I am seeing other issues, for example
https://console.apify.com/actors/runs/b8zVWYEO8D5VXA7K8#output
It's not picking up the correct YouTube URL.
Correct URL is
https://www.youtube.com/channel/UCT5wfV2gu5oelksNzVkvcgQ
you are scraping
https://www.youtube.com/channel/uct5wfv2gu5oelksnzvkvcgq
Am I going to get a refund for all these wasted runs?
Kind Regards
Scott

Hi Scott,
Thank you for pointing out the issue with the YouTube URL. We've deployed a fix, and the correct link should now be extracted properly for www.wickey.ie.
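For context, the pitfall is that YouTube channel IDs are case-sensitive, so only the scheme and host of a URL can safely be lowercased. A minimal sketch of case-preserving normalization (illustrative only, not necessarily the exact code of the fix):

```ts
// Sketch only: the URL parser already lowercases the scheme and hostname,
// while the path keeps its original casing, so channel IDs survive intact.
// Lowercasing the whole string would corrupt them.
function normalizeSocialUrl(raw: string): string {
    return new URL(raw.trim()).toString();
}

normalizeSocialUrl('HTTPS://WWW.YouTube.com/channel/UCT5wfV2gu5oelksNzVkvcgQ');
// -> "https://www.youtube.com/channel/UCT5wfV2gu5oelksNzVkvcgQ"
```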
Regarding a refund, the cost of a run goes toward Apify's infrastructure and does not go to us directly. We don’t manage billing or refunds, so you’ll need to contact Apify support for any refund-related matters.
If you come across any other issues with the actor, feel free to share them with us. We're happy to investigate and fix them promptly.
spr123
Hi,
It failed when I tried to test it again:
https://console.apify.com/actors/runs/ZzJsLKG5NhKxoJnqx#output

Thank you for sharing the details. We can confirm that we see the same issue when running with the same inputs; however, we cannot say for sure what is causing it right away.
We tried running the same inputs locally on our machine and everything worked smoothly, so the issue appears to be inside the Apify environment that executes our crawler. We will debug this and let you know the results.