Facebook Video Details Scraper

Developed by Neuro Scraper · Maintained by Community

Pricing: $9.00/month + usage

Extract public Facebook video details in seconds! Get title, description 📝, upload date 📅, reactions 👍❤️😮, shares 🔄, creator info 👤, and video URL 🌐. Perfect for research, analytics, and trend tracking. ✅


Last modified: a day ago

🎯 Facebook Video Metadata Scraper

Extract structured metadata from Facebook videos and Reels directly on the Apify platform.


📖 Summary

This Actor takes one or more Facebook video or Reel URLs and produces structured JSON metadata (title, uploader, view counts, upload dates, thumbnails, tags, etc.). Results are pushed to the default Dataset and also consolidated under the ALL_RESULTS key in the Key-Value Store.


💡 Use cases / When to use

  • Market research and trend analysis on Facebook video performance.
  • Monitoring competitor video engagement (views, likes, comments).
  • Collecting structured data for analytics dashboards.
  • Archiving Facebook video metadata without downloading full media.
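
The analytics use case can be sketched with a small aggregation over scraped items. The helper and sample items below are hypothetical (field names follow this Actor's output format; the data is made up):

```python
from collections import defaultdict

def engagement_by_uploader(items):
    """Sum like and comment counts per uploader across scraped items."""
    totals = defaultdict(lambda: {"likes": 0, "comments": 0})
    for item in items:
        uploader = item.get("uploader", "unknown")
        totals[uploader]["likes"] += item.get("like_count") or 0
        totals[uploader]["comments"] += item.get("comment_count") or 0
    return dict(totals)

# Hypothetical sample items shaped like this Actor's output fields
sample = [
    {"uploader": "Page A", "like_count": 150, "comment_count": 20},
    {"uploader": "Page A", "like_count": 50, "comment_count": 5},
    {"uploader": "Page B", "like_count": 10, "comment_count": 1},
]
print(engagement_by_uploader(sample))
```

In practice the items would come from the run's Dataset rather than a hardcoded list.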

⚡ Quick Start (Console)

  1. Go to your Actor in Apify Console.

  2. Click Run.

  3. In the Input tab, paste JSON like:

     {
       "startUrls": [
         { "url": "https://www.facebook.com/reel/1234567890123456" }
       ],
       "maxItems": 5
     }
  4. Click Run.

  5. Check the Dataset tab for extracted items, or Key-Value Store for the consolidated ALL_RESULTS file.


⚡ Quick Start (CLI + API)

CLI (apify-cli)

$ apify run -p input.json

Where input.json contains:

{
  "startUrls": [
    { "url": "https://www.facebook.com/reel/1234567890123456" }
  ]
}

API (apify-client in Python)

from apify_client import ApifyClient

client = ApifyClient('<APIFY_TOKEN>')

run = client.actor('<ACTOR_ID>').call(run_input={
    "startUrls": [{"url": "https://www.facebook.com/reel/1234567890123456"}],
    "maxItems": 5,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
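
The consolidated ALL_RESULTS record can be read back from the same run's Key-Value Store. A minimal sketch, assuming `client` is the ApifyClient instance and `run` the dict returned by `.call()` above:

```python
def fetch_all_results(client, run):
    """Return the consolidated ALL_RESULTS array for a finished run.

    `client` is an ApifyClient instance; `run` is the run dict returned
    by .call(). get_record() returns None when the key does not exist.
    """
    store = client.key_value_store(run["defaultKeyValueStoreId"])
    record = store.get_record("ALL_RESULTS")
    return record["value"] if record is not None else []
```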

๐Ÿ“ Inputs

  • startUrls (array of objects or strings, required) โ€” Facebook video or Reel URLs to process.
  • cookiesFile (string, optional) โ€” Path to uploaded cookies file. Useful for login-required videos.
  • proxyConfiguration (object, optional) โ€” Proxy settings configured in Apify Console.
  • maxItems (integer, optional) โ€” Maximum number of items to scrape.
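
Because startUrls accepts both plain strings and { "url": ... } objects, a normalization pass keeps run inputs consistent. This helper is a hypothetical client-side utility, not part of the Actor itself:

```python
def normalize_start_urls(start_urls):
    """Coerce a mix of URL strings and {"url": ...} objects into object form."""
    normalized = []
    for entry in start_urls:
        if isinstance(entry, str):
            normalized.append({"url": entry})
        elif isinstance(entry, dict) and "url" in entry:
            normalized.append({"url": entry["url"]})
        else:
            raise ValueError(f"Invalid startUrls entry: {entry!r}")
    return normalized
```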

โš™๏ธ Configuration

๐Ÿ”‘ Name๐Ÿ“ Typeโ“ Requiredโš™๏ธ Default๐Ÿ“Œ Example๐Ÿ“ Notes
startUrlsarrayโœ… Yesnull[ {"url": "https://facebook.com/reel/..."} ]List of URLs to scrape
cookiesFilestringโŒ Nonullcookies.txtUpload via Apify key-value store
proxyConfigurationobjectโŒ No{}{ "useApifyProxy": true }Configure in Console โ†’ Proxy
maxItemsintegerโŒ No0 (no limit)50Limit number of items processed
ALL_RESULTSdatasetAuton/aKey-Value Store entryConsolidated JSON array of all results

โžก๏ธ Example: In Console โ†’ Run โ†’ Input, paste:

{
"startUrls": [ {"url": "https://www.facebook.com/reel/1234567890123456"} ]
}

📤 Outputs

Each processed video produces a JSON object like:

{
  "platform": "facebook",
  "webpage_url": "https://www.facebook.com/reel/1234567890123456",
  "id": "1234567890123456",
  "title": "Sample video",
  "duration": "25s",
  "upload_date": "16th September 2025",
  "timestamp_iso": "2025-09-16T10:30:00Z",
  "view_count": "2.3K",
  "like_count": 150,
  "comment_count": 20,
  "uploader": "Page Name",
  "thumbnail": "https://...jpg"
}

  • All items → the default Dataset.
  • Consolidated array → Key-Value Store entry ALL_RESULTS.
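
Note that some counts come back as display strings (e.g. "2.3K" for view_count) while others are plain integers. A small parser makes both usable in analytics; this is a hypothetical helper based on the example output above, not a guarantee of the Actor's formatting:

```python
def parse_count(value):
    """Convert display counts like "2.3K" or "1.1M" to ints; pass ints through."""
    if isinstance(value, int):
        return value
    text = str(value).strip().upper().replace(",", "")
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    if text and text[-1] in multipliers:
        # round() avoids float truncation, e.g. 2.3 * 1000 == 2299.999...
        return round(float(text[:-1]) * multipliers[text[-1]])
    return round(float(text))
```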

🔑 Environment variables

  • APIFY_TOKEN: required for API or CLI usage.
  • HTTP_PROXY / HTTPS_PROXY: only if custom external proxies are used.

โ–ถ๏ธ How to Run

In Apify Console

  1. Open Actor โ†’ Run.
  2. Paste input JSON into Input.
  3. Run โ†’ View results in Dataset and Key-Value Store.

CLI

$ apify call <ACTOR_ID> -p input.json

API

See the Python snippet above under Quick Start.


โฐ Scheduling & Webhooks

  • Configure in Console โ†’ Schedule to run hourly/daily.
  • Add webhooks in Console โ†’ Webhooks to trigger on success/failure.

๐Ÿž Logs & Troubleshooting

  • View run logs in Console โ†’ Runs โ†’ select a run.

  • Common errors:

    • No startUrls provided. โ†’ Ensure you set startUrls.
    • Empty dataset โ†’ The link was invalid or required login without cookies.

🔒 Permissions & Storage notes

  • Output is saved to the Actor's default Dataset and Key-Value Store.
  • If using cookies, upload them securely in Apify storage.

🆕 Changelog / Versioning tip

  • Increment Actor version when input schema or output fields change.

📌 Notes / TODOs

  • TODO: Confirm how cookiesFile should be uploaded (default key-value store vs. dataset file input).
  • TODO: Document any rate-limits when scraping high volumes of URLs.

๐ŸŒ Proxy configuration

  • In Console โ†’ Run โ†’ Proxy, enable Apify Proxy with one click.

  • To use your own proxy: In Console โ†’ Actor settings โ†’ Environment variables, set:

    • HTTP_PROXY = http://<USER>:<PASS>@<HOST>:<PORT>
    • HTTPS_PROXY = http://<USER>:<PASS>@<HOST>:<PORT>
  • Always store credentials as secrets in Apify.

  • TODO: Advanced proxy rotation and load balancing patterns can be added later.


๐Ÿง What I inferred from main.py

  • Actor requires startUrls as input.
  • Optional inputs: cookiesFile, proxyConfiguration, maxItems.
  • Outputs go to both Dataset and Key-Value Store key ALL_RESULTS.
  • Proxy usage is supported, hence Proxy section included.
  • Marked TODOs for cookie upload method and possible rate limits.