AI Job Search Agent

Under maintenance

Pricing

Pay per event

Developed by

mohamed el hadi msaid

Maintained by Community

5.0 (1)


Monthly users: 0

Runs succeeded: 5 (30%)

Last modified: 17 days ago

.actor/Dockerfile

# First, specify the base Docker image.
# You can see the Docker images from Apify at https://hub.docker.com/r/apify/.
# You can also use any other image from Docker Hub.
FROM apify/actor-python:3.13

# Second, copy just requirements.txt into the Actor image,
# since it should be the only file that affects the dependency installation in the next step,
# in order to speed up the build.
COPY requirements.txt ./

# Install the packages specified in requirements.txt,
# print the installed Python version, pip version,
# and all installed packages with their versions for debugging.
RUN echo "Python version:" \
 && python --version \
 && echo "Pip version:" \
 && pip --version \
 && echo "Installing dependencies:" \
 && pip install -r requirements.txt \
 && echo "All installed Python packages:" \
 && pip freeze

# Next, copy the remaining files and directories with the source code.
# Since we do this after installing the dependencies, quick builds will be really fast
# for most source file changes.
COPY . ./

# Use compileall to ensure the runnability of the Actor Python code.
RUN python3 -m compileall -q .

# Create and run as a non-root user.
RUN useradd -m apify && \
    chown -R apify:apify ./
USER apify

# Specify how to launch the source code of your Actor.
# Here the Actor is started as the "src" module: "python3 -m src".
CMD ["python3", "-m", "src"]

.actor/actor.json

{
	"actorSpecification": 1,
	"name": "AI-Job-Search-Agent",
	"title": "Python LangGraph Agent",
	"description": "LangGraph agent in python",
	"version": "0.0",
	"buildTag": "latest",
	"input": "./input_schema.json",
	"storages": {
		"dataset": "./dataset_schema.json"
	},
	"meta": {
		"templateId": "python-langgraph"
	},
	"dockerfile": "./Dockerfile"
}

.actor/dataset_schema.json

{
  "actorSpecification": 1,
  "views": {
    "overview": {
      "title": "Overview",
      "transformation": {
        "fields": ["response", "structured_response"]
      },
      "display": {
        "component": "table",
        "properties": {
          "response": {
            "label": "Response",
            "format": "text"
          },
          "structured_response": {
            "label": "Structured Response",
            "format": "object"
          }
        }
      }
    }
  }
}

.actor/input_schema.json

{
  "title": "AI Job Search Agent",
  "type": "object",
  "schemaVersion": 1,
  "properties": {
    "resume": {
      "title": "Resume",
      "type": "string",
      "description": "The resume text to analyze for job matching",
      "editor": "textarea",
      "prefill": "Enter your resume text here..."
    },
    "location": {
      "title": "Location",
      "type": "string",
      "description": "Preferred job location (city, state, or 'Remote')",
      "editor": "textfield",
      "default": "Remote"
    },
    "jobType": {
      "title": "Job Type",
      "type": "string",
      "description": "Type of employment desired",
      "enum": [
        "full-time",
        "part-time",
        "contract",
        "internship",
        "remote"
      ],
      "default": "full-time"
    },
    "keywords": {
      "title": "Keywords",
      "type": "string",
      "description": "Additional search keywords (comma-separated)",
      "editor": "textfield",
      "example": "python, machine learning, data science"
    },
    "modelName": {
      "title": "AI Model",
      "type": "string",
      "description": "The OpenAI model to use for analysis",
      "enum": [
        "gpt-4o-mini",
        "gpt-4-turbo",
        "gpt-3.5-turbo"
      ],
      "default": "gpt-4o-mini"
    }
  },
  "required": ["resume"]
}
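The schema above can be exercised without the platform. Below is a stdlib-only sketch; the helper name `normalize_input` and the defaults table are illustrative, mirroring the schema's `required` list, `enum`, and `default` values:

```python
# Illustrative helper mirroring .actor/input_schema.json: "resume" is the only
# required property; the rest fall back to the schema's declared defaults.
SCHEMA_DEFAULTS = {"location": "Remote", "jobType": "full-time", "modelName": "gpt-4o-mini"}
ALLOWED_JOB_TYPES = {"full-time", "part-time", "contract", "internship", "remote"}

def normalize_input(raw: dict) -> dict:
    """Apply schema defaults and reject inputs missing the required field."""
    if not raw.get("resume"):
        raise ValueError("'resume' is required")
    merged = {**SCHEMA_DEFAULTS, **raw}
    if merged["jobType"] not in ALLOWED_JOB_TYPES:
        raise ValueError(f"invalid jobType: {merged['jobType']!r}")
    return merged

print(normalize_input({"resume": "Python dev, 5 years"}))
```

On the platform itself this validation is done by Apify from the schema; the sketch is only useful for local tests.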

.actor/pay_per_event.json

{
    "actor-start-gb": {
        "eventTitle": "Actor start per 1 GB",
        "eventDescription": "Flat fee for starting an Actor run for each 1 GB of memory.",
        "eventPriceUsd": 0.005
    },
    "openai-100-tokens-gpt-4o": {
        "eventTitle": "Price per 100 OpenAI tokens for gpt-4o",
        "eventDescription": "Flat fee for each 100 gpt-4o tokens used.",
        "eventPriceUsd": 0.001
    },
    "openai-100-tokens-gpt-4o-mini": {
        "eventTitle": "Price per 100 OpenAI tokens for gpt-4o-mini",
        "eventDescription": "Flat fee for each 100 gpt-4o-mini tokens used.",
        "eventPriceUsd": 0.00006
    },
    "openai-100-tokens-gpt-o1": {
        "eventTitle": "Price per 100 OpenAI tokens for o1",
        "eventDescription": "Flat fee for each 100 o1 tokens used.",
        "eventPriceUsd": 0.006
    },
    "openai-100-tokens-gpt-o3-mini": {
        "eventTitle": "Price per 100 OpenAI tokens for o3-mini",
        "eventDescription": "Flat fee for each 100 o3-mini tokens used.",
        "eventPriceUsd": 0.00044
    }
}
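To see how the per-100-token prices above add up, here is a small arithmetic sketch. The ceiling-per-started-block billing is an assumption for illustration; the platform may meter partial blocks differently:

```python
# Per-100-token prices copied from .actor/pay_per_event.json.
PRICES_PER_100_TOKENS = {
    "gpt-4o": 0.001,
    "gpt-4o-mini": 0.00006,
    "o1": 0.006,
    "o3-mini": 0.00044,
}

def token_cost_usd(model: str, tokens: int) -> float:
    """Estimate the charge for a token count, assuming each started
    100-token block is billed (ceiling division)."""
    blocks = -(-tokens // 100)
    return blocks * PRICES_PER_100_TOKENS[model]

# e.g. 2,000 gpt-4o-mini tokens = 20 blocks at $0.00006 each
print(token_cost_usd("gpt-4o-mini", 2000))
```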

src/__init__.py

src/__main__.py

import asyncio

from .main import main

# Execute the Actor entry point.
asyncio.run(main())

src/main.py

"""This module defines the main entry point for the Apify Actor.

Feel free to modify this file to suit your specific needs.

To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
https://docs.apify.com/sdk/python
"""

from __future__ import annotations

import os
from typing import Dict, Any, List
import json
import re

from apify import Actor
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent as create_base_agent
from langchain.prompts import PromptTemplate

from src.ppe_utils import charge_for_actor_Run
from src.tools import tool_linkedin_search, tool_indeed_search, tool_dice_search, analyze_resume

# The OpenAI API key is read from the OPENAI_API_KEY environment variable;
# never hardcode secrets in source code.

# Fallback input is provided only for local testing; delete it before deployment.
fallback_input = {
    'query': 'This is a fallback test query; do nothing and ignore it.',
    'modelName': 'gpt-4o-mini',
}

def setup_react_agent(llm: ChatOpenAI, tools: list, response_format: Any) -> AgentExecutor:
    """Create a ReAct agent with the given LLM and tools."""

    prompt = PromptTemplate.from_template("""Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of {tool_names}
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to ALWAYS follow the format above - start with Thought, then Action, then Action Input.

Question: {input}

{agent_scratchpad}""")

    # Create the agent using LangChain's create_react_agent
    agent = create_base_agent(llm, tools, prompt)

    return AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=True,
        handle_parsing_errors=True,
        max_iterations=6  # Limit the number of iterations to prevent infinite loops
    )

def format_job_results(jobs: List[Dict[str, Any]]) -> str:
    """Format job results into a readable report."""
    if not jobs:
        return "No jobs found matching your criteria."

    report = "# Available Job Opportunities\n\n"

    for i, job in enumerate(jobs, 1):
        report += f"## {i}. {job['title']}\n"
        report += f"**Company:** {job['company']}\n"
        report += f"**Location:** {job['location']}\n"
        report += f"**Type:** {job['employment_type']}\n"
        report += f"**Salary:** {job['salary']}\n"
        report += f"**Posted:** {job['posting_date']}\n"
        report += f"**Description:** {job['description']}\n"
        report += f"**Apply here:** {job['url']}\n\n"
        report += "---\n\n"

    return report

# Update the agent's system message to enforce strict JSON output
system_message = """You are a job search assistant. When searching for jobs, you MUST ONLY return a JSON response wrapped in code block markers, with NO OTHER TEXT before or after. Format exactly like this:

```json
{
    "summary": {
        "total_jobs_found": <number>,
        "skills_matched": ["skill1", "skill2", ...],
        "experience_years": <number>,
        "previous_position": "position title"
    },
    "jobs": [
        {
            "title": "Job Title",
            "company": "Company Name",
            "location": "Location",
            "posting_date": "YYYY-MM-DD",
            "employment_type": "Full-time/Contract/etc",
            "salary": "Salary Range",
            "description": "Brief job description",
            "url": "Application URL",
            "is_remote": true/false,
            "skills_match": ["matched_skill1", "matched_skill2", ...],
            "match_percentage": 85
        }
    ]
}
```

CRITICAL RULES:
1. Return ONLY the JSON code block above - no other text
2. Always start with ```json and end with ```
3. Ensure the JSON is valid and properly formatted
4. Do not include any explanations or thoughts in the output
5. Fill in all fields, using "Not specified" for missing values
"""

async def charge_for_actor_start() -> None:
    # Implement charging logic here
    pass

async def main() -> None:
    """Main entry point for the Apify Actor."""
    async with Actor:
        # Charge for the start event
        await charge_for_actor_Run()
        # Get input
        actor_input = await Actor.get_input() or fallback_input
        resume = actor_input.get('resume', '')
        location = actor_input.get('location', 'Remote')
        job_type = actor_input.get('jobType', 'full-time')
        keywords = actor_input.get('keywords', '')
        # The input schema exposes this field as "modelName"
        model_name = actor_input.get('modelName', 'gpt-4o-mini')

        # Initialize the LLM with the requested model
        llm = ChatOpenAI(
            model_name=model_name,
            temperature=0.7,
            max_tokens=2000
        )

        # Create the tools list
        tools = [tool_linkedin_search, tool_indeed_search, tool_dice_search, analyze_resume]

        # Get tool names for the prompt
        tool_names = [tool.name for tool in tools]

        # Create the agent
        agent = setup_react_agent(llm, tools, None)

        # Process the query
        result = await agent.ainvoke(
            {
                "input": f"""Find relevant job opportunities based on this resume and preferences:
Resume:
{resume}

Job Preferences:
- Location: {location}
- Job Type: {job_type}
- Keywords: {keywords}

Analyze the resume and search for matching jobs. Return a JSON response with:
1. A brief summary of the search results
2. An array of relevant jobs found (limit to top 5 most relevant)
3. Recommended next steps for the job seeker

Format the response as a JSON object with these exact fields:
{{
    "summary": "Brief overview of search results",
    "jobs": [
        {{
            "title": "Job title",
            "company": "Company name",
            "location": "Job location",
            "salary": "Salary if available",
            "match_score": "Relevance score 0-1",
            "url": "Job posting URL"
        }}
    ],
    "recommendations": ["List of recommended next steps"]
}}""",
                "tools": tools,
                "tool_names": tool_names
            }
        )

        # Process and push final results only once
        try:
            if isinstance(result, dict) and 'output' in result:
                output = result['output']

                # Try to extract JSON from various formats
                json_data = None

                # Try direct JSON parsing first
                if isinstance(output, str):
                    try:
                        json_data = json.loads(output)
                    except json.JSONDecodeError:
                        # Try extracting from markdown block
                        json_match = re.search(r'```(?:json)?\s*({\s*".*?})\s*```', output, re.DOTALL)
                        if json_match:
                            try:
                                json_data = json.loads(json_match.group(1).strip())
                            except json.JSONDecodeError:
                                pass

                if json_data:
                    # Validate and clean the data
                    cleaned_data = {
                        "summary": json_data.get("summary", "No summary provided"),
                        "jobs": json_data.get("jobs", [])[:5],  # Limit to top 5 jobs
                        "recommendations": json_data.get("recommendations", [])
                    }
                    await Actor.push_data(cleaned_data)
                else:
                    await Actor.push_data({
                        "error": "Could not parse JSON output",
                        "raw_output": output
                    })
            else:
                await Actor.push_data({
                    "error": "Unexpected output format",
                    "raw_output": str(result)
                })

        except Exception as e:
            Actor.log.error(f"Failed to process results: {str(e)}")
            await Actor.push_data({
                "error": f"Failed to process results: {str(e)}",
                "raw_output": str(result)
            })

if __name__ == "__main__":
    # __main__.py is the normal entry point; this guard mirrors it for direct runs.
    import asyncio

    asyncio.run(main())
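The fenced-JSON extraction in `main` can be factored into a standalone helper for unit testing. A sketch (the name `extract_json` is illustrative; the regex mirrors the one used in `main`):

```python
import json
import re
from typing import Optional

def extract_json(output: str) -> Optional[dict]:
    """Parse agent output as JSON, falling back to a ```json fenced block."""
    try:
        return json.loads(output)
    except json.JSONDecodeError:
        pass
    match = re.search(r'```(?:json)?\s*({\s*".*?})\s*```', output, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(1).strip())
        except json.JSONDecodeError:
            pass
    return None

print(extract_json('```json\n{"summary": "ok", "jobs": []}\n```'))
```

Note the lazy `.*?` stops at the first `}`, so objects nested inside `summary` would defeat it; the sketch keeps the original pattern's behavior rather than fixing that.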

src/models.py

"""This module defines Pydantic models for this project.

These models are used mainly for the structured tool and LLM outputs.
Resources:
- https://docs.pydantic.dev/latest/concepts/models/
"""

from __future__ import annotations

from pydantic import BaseModel, Field
from typing import List, Optional, Dict


class JobPreferences(BaseModel):
    location: str = Field(..., description="Preferred job location")
    job_types: List[str] = Field(default=["full-time"], description="Types of employment")
    salary_range: Optional[Dict[str, float]] = Field(None, description="Desired salary range")
    remote_preference: str = Field(default="hybrid", description="Remote work preference: 'remote', 'hybrid', 'onsite'")
    industries: Optional[List[str]] = Field(None, description="Preferred industries")
    experience_level: str = Field(default="mid-level", description="Experience level: 'entry', 'mid-level', 'senior'")


class JobMatch(BaseModel):
    title: str = Field(..., description="Job title")
    company: str = Field(..., description="Company name")
    location: str = Field(..., description="Job location")
    url: str = Field(..., description="Job posting URL")
    match_score: float = Field(..., description="Match score between 0 and 1")
    salary_range: Optional[str] = Field(None, description="Salary range if available")
    key_requirements: List[str] = Field(default_factory=list, description="Key job requirements")
    skill_matches: List[str] = Field(default_factory=list, description="Matching skills from resume")
    missing_skills: List[str] = Field(default_factory=list, description="Required skills not found in resume")
    job_description: str = Field(..., description="Brief job description")
    posting_date: Optional[str] = Field(None, description="When the job was posted")


class AgentStructuredOutput(BaseModel):
    """Structured output for the ReAct agent."""
    matches: List[JobMatch] = Field(..., description="List of matching jobs")
    summary: str = Field(..., description="Summary of job search results")
    recommended_actions: List[str] = Field(..., description="Recommended next steps")
    total_matches: int = Field(..., description="Total number of matches found")
    average_match_score: float = Field(..., description="Average match score across all jobs")
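The aggregate fields of `AgentStructuredOutput` (`total_matches`, `average_match_score`) are derived from the per-job `match_score` values. A stdlib-only sketch of that derivation, with plain dicts standing in for `JobMatch` (the helper name is illustrative):

```python
def summarize_matches(matches: list) -> dict:
    """Derive AgentStructuredOutput's aggregate fields from job matches."""
    if not matches:
        return {"total_matches": 0, "average_match_score": 0.0}
    scores = [m["match_score"] for m in matches]
    return {
        "total_matches": len(matches),
        "average_match_score": sum(scores) / len(scores),
    }

print(summarize_matches([{"match_score": 0.8}, {"match_score": 0.6}]))
```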

src/ppe_utils.py

from apify import Actor


async def charge_for_actor_Run() -> None:
    """Charge a flat fee for the Actor run.

    Charges the 'actor-run' pay-per-event event once per run.
    """
    await Actor.charge(event_name='actor-run')

src/tools.py

"""This module defines the tools used by the agent.

Feel free to modify or add new tools to suit your specific needs.

To learn how to create a new tool, see:
- https://python.langchain.com/docs/concepts/tools/
- https://python.langchain.com/docs/how_to/#tools

Tools for job searching and resume analysis using various job board scrapers.
"""

from __future__ import annotations

from typing import List, Dict, Any, TypedDict
from pydantic import BaseModel, Field

from apify import Actor
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import JsonOutputParser


class JobSearchInput(BaseModel):
    """Input schema for job search tools."""
    query: str = Field(..., description="Job title or keywords")
    location: str = Field(default="Remote", description="Job location")


class JobSearchResult(TypedDict):
    """Standardized job search result format."""
    title: str
    company: str
    location: str
    posting_date: str
    employment_type: str
    salary: str
    description: str
    url: str
    is_remote: bool


class ResumeInput(BaseModel):
    resume_text: str


async def base_job_search(
    query: str,
    actor_id: str,
    location: str = "Remote"
) -> List[JobSearchResult]:
    """Base function for job searching across different platforms."""
    try:
        run_input = {
            "query": query.split(',')[0].strip(),
            "location": location if ',' not in query else query.split(',')[1].strip(),
            "limit": 10
        }

        run = await Actor.apify_client.actor(actor_id).call(run_input=run_input)
        if not run:
            return []

        dataset_items = (await Actor.apify_client.dataset(run["defaultDatasetId"]).list_items()).items
        return format_job_results(dataset_items)
    except Exception as e:
        Actor.log.error(f"Job search failed for {actor_id}: {str(e)}")
        return []


def format_job_results(items: List[Dict[str, Any]]) -> List[JobSearchResult]:
    """Format raw job listings into standardized format."""
    formatted_jobs = []
    for job in items:
        try:
            formatted_job = JobSearchResult(
                title=job.get('title', '').strip(),
                company=job.get('companyName', '').strip(),
                location=job.get('jobLocation', {}).get('displayName', '').strip(),
                posting_date=job.get('postedDate', ''),
                employment_type=job.get('employmentType', ''),
                salary=job.get('salary', 'Not specified'),
                description=job.get('summary', '')[:300] + '...' if job.get('summary') else '',  # Limit description length
                url=job.get('detailsPageUrl', ''),
                is_remote=job.get('isRemote', False)
            )
            formatted_jobs.append(formatted_job)
        except Exception as e:
            Actor.log.error(f"Failed to format job listing: {str(e)}")
            continue

    return formatted_jobs[:5]  # Limit to top 5 results


async def _linkedin_search(query: str) -> List[JobSearchResult]:
    """Search for jobs on LinkedIn."""
    return await base_job_search(query, "bebity/linkedin-jobs-scraper")


# Create LinkedIn search tool
tool_linkedin_search = Tool(
    name="search_linkedin_jobs",
    description="Search for jobs on LinkedIn. Input format: 'job title, location'",
    func=_linkedin_search,
    coroutine=_linkedin_search
)


async def _indeed_search(query: str) -> List[JobSearchResult]:
    """Search for jobs on Indeed."""
    return await base_job_search(query, "curious_coder/indeed-scraper")


# Create Indeed search tool
tool_indeed_search = Tool(
    name="search_indeed_jobs",
    description="Search for jobs on Indeed. Input format: 'job title, location'",
    func=_indeed_search,
    coroutine=_indeed_search
)


async def _dice_search(query: str) -> List[JobSearchResult]:
    """Search for jobs on Dice."""
    return await base_job_search(query, "mohamedgb00714/dicecom-job-scraper")


# Create Dice search tool
tool_dice_search = Tool(
    name="search_dice_jobs",
    description="Search for jobs on Dice. Input format: 'job title, location'",
    func=_dice_search,
    coroutine=_dice_search
)


async def _analyze_resume(resume_text: str) -> Dict[str, Any]:
    """Analyze a resume to extract key information."""
    if not resume_text.strip():
        return {
            "error": "Empty resume text provided",
            "skills": [], "experience": [], "education": [],
            "summary": "No resume to analyze", "years_experience": 0
        }

    try:
        llm = ChatOpenAI(temperature=0)
        output_parser = JsonOutputParser()

        prompt = ChatPromptTemplate.from_template(
            """Analyze this resume and extract key information. Return ONLY a JSON object:

            Resume: {resume_text}

            Format: {format_instructions}
            """
        )

        chain = prompt | llm | output_parser

        analysis = await chain.ainvoke({
            "resume_text": resume_text,
            "format_instructions": output_parser.get_format_instructions()
        })

        return {**analysis, "raw_text": resume_text}

    except Exception as e:
        Actor.log.error(f"Resume analysis failed: {str(e)}")
        return {
            "error": str(e),
            "skills": [], "experience": [], "education": [],
            "summary": "Analysis failed", "years_experience": 0,
            "raw_text": resume_text
        }


# Create analyze_resume tool
analyze_resume = Tool(
    name="analyze_resume",
    description="Analyze a resume to extract skills, experience, and other key information.",
    func=_analyze_resume,
    coroutine=_analyze_resume
)
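`base_job_search` parses the tool's `'job title, location'` input string before calling a scraper. Extracted as a pure function, that parsing is easy to test (a sketch; the helper name `parse_query` is illustrative):

```python
def parse_query(query: str, default_location: str = "Remote") -> dict:
    """Split a "job title, location" tool input into scraper run input."""
    parts = [p.strip() for p in query.split(",")]
    return {
        "query": parts[0],
        "location": parts[1] if len(parts) > 1 else default_location,
        "limit": 10,
    }

print(parse_query("python developer, New York"))
```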

src/utils.py

from apify import Actor
from langchain_core.messages import ToolMessage


def log_state(state: dict) -> None:
    """Logs the state of the graph.

    Uses the `Actor.log.debug` method to log the state of the graph.

    Args:
        state (dict): The state of the graph.
    """
    message = state['messages'][-1]
    # Traverse all tool messages and print them
    # if multiple tools are called in parallel
    if isinstance(message, ToolMessage):
        # Walk backwards until the assistant message with tool_calls
        for _message in state['messages'][::-1]:
            if hasattr(_message, 'tool_calls'):
                break
            Actor.log.debug('-------- Tool Result --------')
            Actor.log.debug('Tool: %s', _message.name)
            Actor.log.debug('Result: %s', _message.content)

    Actor.log.debug('-------- Message --------')
    Actor.log.debug('Message: %s', message)

    # Print all tool calls
    if hasattr(message, 'tool_calls'):
        for tool_call in getattr(message, 'tool_calls', []):
            Actor.log.debug('-------- Tool Call --------')
            Actor.log.debug('Tool: %s', tool_call['name'])
            Actor.log.debug('Args: %s', tool_call['args'])
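The backwards walk in `log_state` — collect tool results until reaching the message that issued the tool calls — can be sketched with plain objects. The `Msg` dataclass is a simplified stand-in for LangChain's message types, and the helper name is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Msg:
    name: str
    content: str
    tool_calls: list = field(default_factory=list)  # non-empty on the calling message

def collect_tool_results(messages: list) -> list:
    """Gather (tool, result) pairs emitted after the last tool-calling message."""
    results = []
    for msg in reversed(messages):
        if msg.tool_calls:  # reached the message that issued the calls
            break
        results.append((msg.name, msg.content))
    return results

history = [
    Msg("assistant", "calling tools", tool_calls=[{"name": "search_linkedin_jobs"}]),
    Msg("search_linkedin_jobs", "3 jobs found"),
    Msg("search_indeed_jobs", "2 jobs found"),
]
print(collect_tool_results(history))
```

Results come out newest-first because the walk is reversed, matching the order `log_state` logs them in.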

.dockerignore

.git
.mise.toml
.nvim.lua
storage

# The rest is copied from https://github.com/github/gitignore/blob/main/Python.gitignore

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
.python-version

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/

.gitignore

.mise.toml
.nvim.lua
storage

# The rest is copied from https://github.com/github/gitignore/blob/main/Python.gitignore

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
.python-version

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/

# Added by Apify CLI
node_modules

input.json

{
    "query": "please find me a job related to my resume",
    "resume": "I am a software engineer with 5 years of experience in Python and web development. Skills include: Python, JavaScript, React, Node.js, AWS, Docker, and CI/CD. Previously worked at Tech Corp as Senior Developer leading a team of 5 engineers...",
    "preferences": {
        "location": "new york",
        "job_types": ["full-time"],
        "remote_preference": "remote",
        "industries": ["technology", "software"],
        "experience_level": "mid-level",
        "salary_range": {
            "min": 100000,
            "max": 150000
        }
    },
    "modelName": "gpt-4o-mini"
}

input2.json

{
    "query": "find me a data science position in San Francisco",
    "resume": "I am a data scientist with 3 years of experience specializing in machine learning and AI. Proficient in Python, TensorFlow, PyTorch, and scikit-learn. Experience with big data technologies like Spark and Hadoop. Previously worked at AI Solutions Inc as ML Engineer developing predictive models...",
    "preferences": {
        "location": "san francisco",
        "job_types": ["full-time"],
        "remote_preference": "hybrid",
        "industries": ["technology", "artificial intelligence"],
        "experience_level": "mid-level",
        "salary_range": {
            "min": 120000,
            "max": 180000
        }
    },
    "modelName": "gpt-4o-mini"
}

requirements.txt

apify<3.0.0
langchain-openai==0.3.6
langgraph==0.2.73
aiohttp>=3.8.0
beautifulsoup4>=4.12.0
langchain>=0.1.0
pydantic>=2.0.0
langchain-core>=0.1.0

Pricing

Pricing model

Pay per event 

This Actor is paid per event. You are not charged for the Apify platform usage, but only a fixed price for specific events.

Full process

$10.00

Covers the whole process: getting data from other Actors' runs and API keys.