MLAR: Multi-layer Large Language Model-based Robotic Process Automation Applicant Tracking
- URL: http://arxiv.org/abs/2507.10472v1
- Date: Mon, 14 Jul 2025 16:53:19 GMT
- Title: MLAR: Multi-layer Large Language Model-based Robotic Process Automation Applicant Tracking
- Authors: Mohamed T. Younes, Omar Walid, Mai Hassan, Ali Hamdi
- Abstract summary: This paper introduces an innovative Applicant Tracking System (ATS) enhanced by a novel Robotic Process Automation (RPA) framework, hereafter referred to as MLAR. MLAR addresses recruitment bottlenecks by employing Large Language Models (LLMs) in three distinct layers: extracting key characteristics from job postings in the first layer, parsing applicant resumes to identify education, experience, and skills in the second layer, and performing similarity matching in the third layer. Our approach integrates seamlessly into existing RPA pipelines, automating resume parsing, job matching, and candidate notifications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper introduces an innovative Applicant Tracking System (ATS) enhanced by a novel Robotic Process Automation (RPA) framework, hereafter referred to as MLAR. Traditional recruitment processes often encounter bottlenecks in resume screening and candidate shortlisting due to time and resource constraints. MLAR addresses these challenges by employing Large Language Models (LLMs) in three distinct layers: extracting key characteristics from job postings in the first layer, parsing applicant resumes to identify education, experience, and skills in the second layer, and performing similarity matching in the third layer. These features are then matched through advanced semantic algorithms to identify the best candidates efficiently. Our approach integrates seamlessly into existing RPA pipelines, automating resume parsing, job matching, and candidate notifications. Extensive performance benchmarking shows that MLAR outperforms the leading RPA platforms, including UiPath and Automation Anywhere, in high-volume resume-processing tasks. When processing 2,400 resumes, MLAR achieved an average processing time of 5.4 seconds per resume, reducing processing time by approximately 16.9% compared to Automation Anywhere and 17.1% compared to UiPath. These results highlight the potential of MLAR to transform recruitment workflows by providing an efficient, accurate, and scalable solution tailored to modern hiring needs.
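The three-layer structure described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the paper publishes no code, so each LLM call is stubbed with naive keyword extraction, and the "advanced semantic algorithms" are approximated by simple Jaccard overlap.

```python
def extract_job_features(posting: str) -> set[str]:
    # Layer 1: key characteristics from the job posting (an LLM call in MLAR;
    # here a naive keyword split over words longer than three characters).
    return {w.lower().strip(".,") for w in posting.split() if len(w) > 3}

def parse_resume(resume: str) -> set[str]:
    # Layer 2: education/experience/skills from the resume (an LLM call in
    # MLAR; stubbed with the same keyword heuristic).
    return {w.lower().strip(".,") for w in resume.split() if len(w) > 3}

def rank_candidates(posting: str, resumes: dict[str, str]) -> list[tuple[str, float]]:
    # Layer 3: similarity matching between job features and each resume,
    # approximated by Jaccard overlap and sorted best-first.
    job = extract_job_features(posting)

    def score(text: str) -> float:
        cand = parse_resume(text)
        union = job | cand
        return len(job & cand) / len(union) if union else 0.0

    return sorted(((name, score(text)) for name, text in resumes.items()),
                  key=lambda kv: kv[1], reverse=True)
```

In a real deployment, the keyword stubs would be replaced by structured LLM extraction and the Jaccard score by semantic embedding similarity, but the layered control flow stays the same.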
Related papers
- AI Hiring with LLMs: A Context-Aware and Explainable Multi-Agent Framework for Resume Screening [12.845918958645676]
We propose a multi-agent framework for resume screening using Large Language Models (LLMs). The framework consists of four core agents: a resume extractor, an evaluator, a summarizer, and a score formatter. This dynamic adaptation enables personalized recruitment, bridging the gap between AI automation and talent acquisition.
arXiv Detail & Related papers (2025-04-01T12:56:39Z) - From Text to Talent: A Pipeline for Extracting Insights from Candidate Profiles [44.38380596387969]
This paper proposes a novel pipeline that leverages Large Language Models and graph similarity measures to suggest ideal candidates for specific job openings. Our approach represents candidate profiles as multimodal embeddings, enabling the capture of nuanced relationships between job requirements and candidate attributes.
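Matching embedded candidate profiles against a job requirement typically reduces to a vector-similarity score. A self-contained sketch over toy vectors (the actual embedding model and graph similarity measure used by the paper are not reproduced here, so plain cosine similarity stands in):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors; returns 0.0 for
    # zero-norm inputs to avoid division by zero.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical embeddings: one job-requirement vector and two candidates.
job = [0.9, 0.1, 0.3]
cand_close = [0.8, 0.2, 0.25]   # profile similar to the job requirements
cand_far = [0.1, 0.9, 0.0]      # profile far from the job requirements
```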
arXiv Detail & Related papers (2025-03-21T16:18:44Z) - LLM-AutoDiff: Auto-Differentiate Any LLM Workflow [58.56731133392544]
We introduce LLM-AutoDiff: a novel framework for Automatic Prompt Engineering (APE). LLM-AutoDiff treats each textual input as a trainable parameter and uses a frozen backward engine to generate feedback akin to textual gradients. It consistently outperforms existing textual gradient baselines in both accuracy and training cost.
arXiv Detail & Related papers (2025-01-28T03:18:48Z) - Forecasting Application Counts in Talent Acquisition Platforms: Harnessing Multimodal Signals using LMs [5.7623855432001445]
We discuss a novel task in the recruitment domain, namely, application count forecasting.
We show that existing auto-regressive based time series forecasting methods perform poorly for this task.
We propose a multimodal LM-based model which fuses job-posting metadata of various modalities through a simple encoder.
arXiv Detail & Related papers (2024-11-19T01:18:32Z) - Multi-agent Path Finding for Timed Tasks using Evolutionary Games [1.3023548510259344]
We show that our algorithm is faster than deep RL methods by at least an order of magnitude.
Our results indicate that it scales better with an increase in the number of agents as compared to other methods.
arXiv Detail & Related papers (2024-11-15T20:10:25Z) - AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML [56.565200973244146]
Automated machine learning (AutoML) accelerates AI development by automating tasks in the development pipeline. Recent works have started exploiting large language models (LLMs) to lessen such burden. This paper proposes AutoML-Agent, a novel multi-agent framework tailored for full-pipeline AutoML.
arXiv Detail & Related papers (2024-10-03T20:01:09Z) - Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? [73.81908518992161]
We introduce Spider2-V, the first multimodal agent benchmark focusing on professional data science and engineering.
Spider2-V features real-world tasks in authentic computer environments and incorporates 20 enterprise-level professional applications.
These tasks evaluate the ability of a multimodal agent to perform data-related tasks by writing code and managing the GUI in enterprise data software systems.
arXiv Detail & Related papers (2024-07-15T17:54:37Z) - APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking [39.649879274238856]
We introduce a novel automatic prompt engineering algorithm named APEER. APEER iteratively generates refined prompts through feedback and preference optimization. We find that the prompts generated by APEER exhibit better transferability across diverse tasks and LLMs.
arXiv Detail & Related papers (2024-06-20T16:11:45Z) - The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z) - Application of LLM Agents in Recruitment: A Novel Framework for Resume Screening [0.0]
This paper introduces a novel Large Language Models (LLMs) based agent framework for resume screening.
Our framework is distinct in its ability to efficiently summarize and grade each resume from a large dataset.
The results demonstrate that our automated resume screening framework is 11 times faster than traditional manual methods.
arXiv Detail & Related papers (2024-01-16T12:30:56Z) - TaskBench: Benchmarking Large Language Models for Task Automation [82.2932794189585]
We introduce TaskBench, a framework to evaluate the capability of large language models (LLMs) in task automation.
Specifically, task decomposition, tool selection, and parameter prediction are assessed.
Our approach combines automated construction with rigorous human verification, ensuring high consistency with human evaluation.
arXiv Detail & Related papers (2023-11-30T18:02:44Z) - ProAgent: From Robotic Process Automation to Agentic Process Automation [87.0555252338361]
Large Language Models (LLMs) have exhibited human-like intelligence.
This paper introduces Agentic Process Automation (APA), a groundbreaking automation paradigm using LLM-based agents for advanced automation.
We then instantiate ProAgent, an agent designed to craft workflows from human instructions and make intricate decisions by coordinating specialized agents.
arXiv Detail & Related papers (2023-11-02T14:32:16Z) - Learning Task Automata for Reinforcement Learning using Hidden Markov Models [37.69303106863453]
This paper proposes a novel pipeline for learning non-Markovian task specifications as succinct finite-state 'task automata'.
We learn a product MDP, a model composed of the specification's automaton and the environment's MDP, by treating the product MDP as a partially observable MDP and using the well-known Baum-Welch algorithm for learning hidden Markov models.
Our learnt task automaton enables the decomposition of a task into its constituent sub-tasks, which improves the rate at which an RL agent can later synthesise an optimal policy.
arXiv Detail & Related papers (2022-08-25T02:58:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.