Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting
- URL: http://arxiv.org/abs/2405.18113v1
- Date: Tue, 28 May 2024 12:23:16 GMT
- Title: Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting
- Authors: Hongda Sun, Hongzhan Lin, Haiyu Yan, Chen Zhu, Yang Song, Xin Gao, Shuo Shang, Rui Yan
- Abstract summary: Existing methods rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them.
Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates.
We propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol.
- Score: 51.54907796704785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of online recruitment services has revolutionized the traditional landscape of job seeking and recruiting, necessitating high-quality industrial applications that improve person-job fitting. Existing methods generally rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them. Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates. The mock interview conversations provide additional evidence for candidate evaluation, augmenting traditional person-job fitting based solely on resumes and job descriptions. However, characterizing these two roles in online recruitment still presents several challenges, such as developing the skills to raise interview questions, formulating appropriate answers, and evaluating two-sided fitness. To this end, we propose MockLLM, a novel and applicable framework that divides the person-job matching process into two modules, mock interview generation and two-sided evaluation in a handshake protocol, and jointly enhances their performance through collaborative behaviors between interviewers and candidates. We design the role-playing framework as a multi-role and multi-behavior paradigm that enables a single LLM agent to effectively perform multiple functions for both parties. Moreover, we propose reflection memory generation and dynamic prompt modification techniques to refine the behaviors of both sides, enabling continuous optimization of the augmented evidence. Extensive experimental results show that MockLLM achieves the best person-job matching performance together with high mock interview quality, envisioning its application in real online recruitment.
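The abstract's "two-sided evaluation in a handshake protocol" can be illustrated with a minimal sketch: a person-job pair is matched only when the interviewer-side evaluation and the candidate-side evaluation both accept. All names, the `Pair` type, and the score threshold below are illustrative assumptions, not details from the paper; the actual evaluations would be produced by LLM agents conditioned on the mock-interview transcript.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Pair:
    """A person-job pair plus the mock-interview transcript that serves
    as additional evidence beyond the resume and job description."""
    resume: str
    job_description: str
    transcript: List[str] = field(default_factory=list)

def handshake_match(pair: Pair,
                    interviewer_score: Callable[[Pair], float],
                    candidate_score: Callable[[Pair], float],
                    threshold: float = 0.5) -> bool:
    """Two-sided evaluation: a match requires BOTH the interviewer-side
    and the candidate-side evaluation to accept (the 'handshake')."""
    return (interviewer_score(pair) >= threshold
            and candidate_score(pair) >= threshold)
```

In this sketch the scorers are injected as callables, so the same handshake logic works whether the scores come from an LLM judgment or a learned matching function.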
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions [62.0123588983514]
Large Language Models (LLMs) have demonstrated wide-ranging applications across various fields.
We reformulate the peer-review process as a multi-turn, long-context dialogue, incorporating distinct roles for authors, reviewers, and decision makers.
We construct a comprehensive dataset containing 26,841 papers with 92,017 reviews collected from multiple sources.
arXiv Detail & Related papers (2024-06-09T08:24:17Z)
- UniRetriever: Multi-task Candidates Selection for Various Context-Adaptive Conversational Retrieval [47.40553943948673]
We propose a multi-task framework that functions as a universal retriever for three dominant retrieval tasks during the conversation: persona selection, knowledge selection, and response selection.
To this end, we design a dual-encoder architecture consisting of a context-adaptive dialogue encoder and a candidate encoder.
Experiments and analysis show that the retriever achieves state-of-the-art retrieval quality both within and outside its training domain.
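The dual-encoder architecture mentioned above can be sketched generically: one encoder embeds the dialogue context, another embeds each candidate, and candidates are ranked by vector similarity. The toy character-count "encoder" below is purely illustrative and stands in for the paper's learned PLM encoders; every name here is an assumption.

```python
from typing import List

def embed(text: str, dim: int = 8) -> List[float]:
    """Toy encoder: counts characters into a fixed-size vector.
    A real dual-encoder system would use a learned context encoder
    and a learned candidate encoder instead."""
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    return vec

def dot(a: List[float], b: List[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(context: str, candidates: List[str]) -> str:
    """Rank candidates (e.g. persona, knowledge, or response strings)
    by similarity between the context embedding and each candidate
    embedding, returning the best-scoring one."""
    ctx = embed(context)
    return max(candidates, key=lambda c: dot(ctx, embed(c)))
```

Because candidates are embedded independently of the context, their vectors can be precomputed and indexed, which is the main efficiency argument for dual-encoder retrieval.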
arXiv Detail & Related papers (2024-02-26T02:48:43Z)
- MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration [102.41118020705876]
Large Language Models (LLMs) have marked a significant advancement in the field of natural language processing.
As their applications extend into multi-agent environments, a need has arisen for a comprehensive evaluation framework.
This work introduces a novel benchmarking framework specifically tailored to assess LLMs within multi-agent settings.
arXiv Detail & Related papers (2023-11-14T21:46:27Z)
- Exploring Emerging Technologies for Requirements Elicitation Interview Training: Empirical Assessment of Robotic and Virtual Tutors [0.0]
We propose an architecture for a Requirements Elicitation Interview Training (REIT) system based on emerging educational technologies.
We demonstrate the applicability of REIT through two implementations: Ro with a physical robotic agent and Vo with a virtual voice-only agent.
arXiv Detail & Related papers (2023-04-28T20:03:48Z)
- Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice [59.705062519344]
One proposed solution involves AI-enabled conversational agents for person-centered interactive language practice.
We present results from ongoing action research targeting quality assurance of proprietary generative dialog models trained for virtual job interviews.
arXiv Detail & Related papers (2022-03-29T10:25:13Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
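The joint multi-task training described above boils down to a combined objective: the response-selection loss plus a weighted sum of the auxiliary self-supervised losses. The sketch below shows only that arithmetic; the loss names and weights are illustrative hyperparameters, not values from the paper.

```python
from typing import Sequence

def multitask_loss(main_loss: float,
                   aux_losses: Sequence[float],
                   aux_weights: Sequence[float]) -> float:
    """Joint objective for multi-task training: the main
    response-selection loss plus a weighted sum of auxiliary
    self-supervised losses (e.g. next-session prediction, utterance
    restoration, incoherence detection, consistency discrimination)."""
    assert len(aux_losses) == len(aux_weights), "one weight per task"
    return main_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))
```

In practice each loss would be a differentiable tensor and the sum would be backpropagated through a shared PLM encoder; the scalar version here just shows how the tasks are combined.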
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills [31.42833993937429]
We investigate ways to combine models trained towards isolated capabilities.
We propose a new dataset, BlendedSkillTalk, to analyze how these capabilities would mesh together in a natural conversation.
Our experiments show that multi-tasking over several tasks that focus on particular capabilities results in better blended conversation performance.
arXiv Detail & Related papers (2020-04-17T20:51:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.