Job Search Agent Langgraph
This Actor is paid per event
AI Job Search Agent
An autonomous AI agent that helps you find the perfect job match by analyzing your resume and searching across LinkedIn and Indeed. Built with Apify and powered by advanced AI capabilities.
Features
- Autonomous AI Agent: Intelligently analyzes resumes and job postings
- Smart Matching: Uses AI to calculate relevance scores based on multiple factors
- Multi-Platform Search: Searches both LinkedIn and Indeed for comprehensive results
- Advanced Scoring: Evaluates jobs based on:
  - Skills Match (30%)
  - Experience Level (25%)
  - Location Match (25%)
  - Company/Role Fit (20%)
- Intelligent Filtering: Removes duplicate listings and irrelevant matches
- Detailed Analytics: Provides match scores and explanations for each job
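The weighted scoring above can be sketched in a few lines. This is an illustrative reconstruction from the published weights, not the Actor's internal code; the criterion names follow the `matchDetails` keys shown in the output format below.

```python
# Weights from the feature list above (sum to 100).
WEIGHTS = {
    "skills_match": 30,
    "experience_match": 25,
    "location_match": 25,
    "company_fit": 20,
}

def overall_match_score(match_details: dict) -> float:
    """Combine per-criterion scores (0-100) into one weighted total."""
    return sum(match_details[key]["score"] * weight
               for key, weight in WEIGHTS.items()) / 100

details = {
    "skills_match": {"score": 90},
    "experience_match": {"score": 85},
    "location_match": {"score": 100},
    "company_fit": {"score": 80},
}
print(overall_match_score(details))  # 89.25
```

A job that scores perfectly on every criterion reaches 100, so the weights behave like percentages of the final score.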
Input Parameters
```json
{
  "resumeText": "Your resume text here",
  "locationPreference": "Seattle, WA",
  "workModePreference": "Any",
  "searchRadius": 25,
  "minSalary": 0,
  "maxJobsToScrape": 20
}
```
Output Format
```json
{
  "query": {
    "resumeSummary": {
      "desired_role": "Data Analyst",
      "skills": ["Python", "SQL", "Tableau"],
      "years_experience": 3,
      "education": "Master's Degree",
      "location_preference": "Seattle, WA"
    },
    "searchParameters": {
      "location": "Seattle, WA",
      "workMode": "Any"
    }
  },
  "results": [
    {
      "position": "Senior Data Analyst",
      "company": "Tech Corp",
      "location": "Seattle, WA",
      "matchScore": 85.5,
      "matchDetails": {
        "skills_match": {"score": 90, "explanation": "..."},
        "experience_match": {"score": 85, "explanation": "..."},
        "location_match": {"score": 100, "explanation": "..."},
        "company_fit": {"score": 80, "explanation": "..."}
      },
      "applicationUrl": "https://...",
      "source": "LinkedIn"
    }
  ],
  "statistics": {
    "totalJobsFound": 20,
    "averageMatchScore": 82.5,
    "topSkillsRequested": ["Python", "SQL", "Tableau", "Excel", "PowerBI"],
    "timestamp": "2024-03-16T17:43:58.800Z"
  }
}
```
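The `statistics` object can be derived from the scored results in a straightforward way. The sketch below is illustrative only; in particular, the `skillsRequested` field per job is an assumption, not part of the documented output schema.

```python
from collections import Counter

# Hypothetical scored results; `skillsRequested` is an assumed
# intermediate field, not part of the Actor's published output.
results = [
    {"position": "Senior Data Analyst", "matchScore": 85.5,
     "skillsRequested": ["Python", "SQL", "Tableau"]},
    {"position": "BI Analyst", "matchScore": 79.5,
     "skillsRequested": ["SQL", "Excel", "PowerBI"]},
]

statistics = {
    "totalJobsFound": len(results),
    "averageMatchScore": sum(r["matchScore"] for r in results) / len(results),
    # Five most frequently requested skills across all listings.
    "topSkillsRequested": [skill for skill, _ in Counter(
        s for r in results for s in r["skillsRequested"]).most_common(5)],
}
print(statistics["averageMatchScore"])  # 82.5
```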
Pricing
This Actor uses a pay-per-event pricing model:
- Resume Parsing: $0.10
- Job Scoring: $0.02 per job
- Results Summary: $0.10
Total cost for processing 10 jobs: $0.40 ($0.10 + 10 × $0.02 + $0.10)
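The cost model above reduces to two flat fees plus a per-job fee, so the cost of a run is easy to estimate up front:

```python
# Fees from the pricing table above.
RESUME_PARSING = 0.10   # flat, once per run
PER_JOB_SCORING = 0.02  # per job scored
RESULTS_SUMMARY = 0.10  # flat, once per run

def run_cost(jobs_scored: int) -> float:
    """Estimated cost in USD for one run that scores `jobs_scored` jobs."""
    return RESUME_PARSING + jobs_scored * PER_JOB_SCORING + RESULTS_SUMMARY

print(f"${run_cost(10):.2f}")  # $0.40
print(f"${run_cost(20):.2f}")  # $0.60
```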
Technical Details
- Built on Apify Platform
- Uses OpenAI GPT models for intelligent analysis
- Implements LangGraph for workflow management
- Integrates with LinkedIn and Indeed scrapers
- Advanced job matching algorithms with weighted scoring
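One part of the pipeline worth spelling out is duplicate removal: the same posting often appears on both LinkedIn and Indeed. A minimal sketch, assuming listings carry `position`, `company`, and `location` fields as in the output format above (the real Actor's dedup logic may differ):

```python
def dedupe_jobs(jobs: list[dict]) -> list[dict]:
    """Keep the first listing seen for each normalized
    (position, company, location) key."""
    seen = set()
    unique = []
    for job in jobs:
        key = tuple(job[f].strip().lower()
                    for f in ("position", "company", "location"))
        if key not in seen:
            seen.add(key)
            unique.append(job)
    return unique

jobs = [
    {"position": "Data Analyst", "company": "Tech Corp",
     "location": "Seattle, WA", "source": "LinkedIn"},
    {"position": "data analyst", "company": "Tech Corp",
     "location": "Seattle, WA", "source": "Indeed"},
]
print(len(dedupe_jobs(jobs)))  # 1
```

Normalizing case and whitespace before keying catches listings that differ only in formatting between the two platforms.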
How It Works
1. Resume Analysis
   - Extracts key information from the resume
   - Identifies skills, experience, and preferences
2. Job Search
   - Searches LinkedIn and Indeed simultaneously
   - Collects detailed job information
   - Removes duplicates and irrelevant listings
3. Smart Matching
   - Scores each job against multiple criteria
   - Uses AI to evaluate job descriptions
   - Calculates weighted relevance scores
4. Results Processing
   - Ranks jobs by match score
   - Generates detailed match explanations
   - Provides comprehensive statistics
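The four stages above chain into a linear pipeline. This is a plain-Python structural sketch only; the function bodies are hypothetical stand-ins for the Actor's LangGraph nodes, which call AI models and scrapers rather than returning canned data.

```python
# Stand-in stages; each returns hard-coded sample data for illustration.
def analyze_resume(resume_text: str) -> dict:
    return {"desired_role": "Data Analyst", "skills": ["Python", "SQL"]}

def search_jobs(profile: dict) -> list[dict]:
    return [{"position": "Data Analyst", "company": "Tech Corp"}]

def score_jobs(profile: dict, jobs: list[dict]) -> list[dict]:
    return [{**job, "matchScore": 85.5} for job in jobs]

def summarize(scored: list[dict]) -> list[dict]:
    # Rank best matches first.
    return sorted(scored, key=lambda j: j["matchScore"], reverse=True)

profile = analyze_resume("...resume text...")
ranked = summarize(score_jobs(profile, search_jobs(profile)))
print(ranked[0]["position"])  # Data Analyst
```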
Getting Started
1. Get your API keys:
   - Apify API Token
   - OpenAI API Key
2. Set up environment variables:
   ```
   APIFY_TOKEN=your_token
   OPENAI_API_KEY=your_key
   ```
3. Run the Actor with your resume and preferences.
License
MIT License - feel free to use and modify!
Scrape single-page in Python template
A template for web scraping data from a single web page in Python. The URL of the web page is passed in via input, which is defined by the input schema. The template uses HTTPX to get the HTML of the page and Beautiful Soup to parse the data from it. The data is then stored in a dataset where you can easily access it.
The scraped data in this template are page headings but you can easily edit the code to scrape whatever you want from the page.
Included features
- Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
- Input schema - define and easily validate a schema for your Actor's input
- Request queue - a queue into which you can put the URLs you want to scrape
- Dataset - store structured data where each object stored has the same attributes
- HTTPX - library for making asynchronous HTTP requests in Python
- Beautiful Soup - library for pulling data out of HTML and XML files
How it works
1. `Actor.get_input()` gets the input where the page URL is defined.
2. `httpx.AsyncClient().get(url)` fetches the page.
3. `BeautifulSoup(response.content, 'lxml')` loads the page data and enables parsing it.
4. `for heading in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):` parses the headings from the page; here you can edit the code to parse whatever you need.
5. `Actor.push_data(headings)` stores the headings in the dataset.
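The template itself uses HTTPX and Beautiful Soup. As a dependency-free illustration of the same heading extraction, here is the equivalent done with only the standard library's `html.parser` (a sketch, not the template's actual code):

```python
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    """Collect the text of h1-h6 elements from an HTML document."""
    HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.headings = []
        self._current = None  # heading tag we are currently inside, if any

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADINGS:
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.headings.append({"level": self._current, "text": data.strip()})

parser = HeadingExtractor()
parser.feed("<h1>Title</h1><p>body</p><h2>Section</h2>")
print(parser.headings)
# [{'level': 'h1', 'text': 'Title'}, {'level': 'h2', 'text': 'Section'}]
```

In the real template the parsed headings would then be passed to `Actor.push_data()` to land in the dataset.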
Resources
- BeautifulSoup Scraper
- Python tutorials in Academy
- Web scraping with Beautiful Soup and Requests
- Beautiful Soup vs. Scrapy for web scraping
- Integration with Make, GitHub, Zapier, Google Drive, and other apps
- Video guide on getting scraped data using Apify API
- A short guide on how to build web scrapers using code templates
Getting started
For complete information see this article. To run the Actor, use the following command:
```
apify run
```
Deploy to Apify
Connect Git repository to Apify
If you've created a Git repository for the project, you can easily connect to Apify:
- Go to Actor creation page
- Click on Link Git Repository button
Push project on your local machine to Apify
You can also deploy the project on your local machine to Apify without the need for the Git repository.
1. Log in to Apify. You will need to provide your Apify API Token to complete this action.
   ```
   apify login
   ```
2. Deploy your Actor. This command will deploy and build the Actor on the Apify Platform. You can find your newly created Actor under Actors -> My Actors.
   ```
   apify push
   ```
Documentation reference
To learn more about Apify and Actors, take a look at the Apify documentation.
Actor Metrics
- Created in Mar 2025