Generative Job Recommendations with Large Language Model
- URL: http://arxiv.org/abs/2307.02157v1
- Date: Wed, 5 Jul 2023 09:58:08 GMT
- Title: Generative Job Recommendations with Large Language Model
- Authors: Zhi Zheng, Zhaopeng Qiu, Xiao Hu, Likang Wu, Hengshu Zhu, Hui Xiong
- Abstract summary: GIRL (GeneratIve job Recommendation based on Large language models) is a novel approach inspired by recent advancements in the field of Large Language Models (LLMs).
We employ a Supervised Fine-Tuning (SFT) strategy to instruct the LLM-based generator in crafting suitable Job Descriptions (JDs) based on the Curriculum Vitae (CV) of a job seeker.
In particular, GIRL serves as a job seeker-centric generative model, providing job suggestions without the need for a candidate set.
- Score: 32.99532175346021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid development of online recruitment services has encouraged the
utilization of recommender systems to streamline the job seeking process.
Predominantly, current job recommendations deploy either collaborative
filtering or person-job matching strategies. However, these models tend to
operate as "black-box" systems and lack the capacity to offer explainable
guidance to job seekers. Moreover, conventional matching-based recommendation
methods are limited to retrieving and ranking existing jobs in the database,
restricting their potential as comprehensive career AI advisors. To this end,
here we present GIRL (GeneratIve job Recommendation based on Large language
models), a novel approach inspired by recent advancements in the field of Large
Language Models (LLMs). We initially employ a Supervised Fine-Tuning (SFT)
strategy to instruct the LLM-based generator in crafting suitable Job
Descriptions (JDs) based on the Curriculum Vitae (CV) of a job seeker.
Moreover, we propose to train a reward model that evaluates the matching degree
between CVs and JDs, and we use a Proximal Policy Optimization
(PPO)-based Reinforcement Learning (RL) method to further fine-tune the
generator. This aligns the generator with recruiter feedback, tailoring the
output to better meet employer preferences. In particular, GIRL serves as a job
seeker-centric generative model, providing job suggestions without the need for
a candidate set. This capability also enhances the performance of existing job
recommendation models by supplementing job seeking features with generated
content. With extensive experiments on a large-scale real-world dataset, we
demonstrate the substantial effectiveness of our approach. We believe that GIRL
introduces a paradigm-shifting approach to job recommendation systems,
fostering a more personalized and comprehensive job-seeking experience.
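The three-stage pipeline the abstract describes (SFT generation, reward modeling, PPO alignment) can be sketched in miniature. The function names and toy scoring below are illustrative assumptions, not the authors' implementation; in the actual system each stage is backed by a fine-tuned LLM and a learned reward model.

```python
# Minimal sketch of the GIRL pipeline described above. All names and the
# toy scoring here are illustrative assumptions, not the paper's code.

def sft_generate_jd(cv: str) -> str:
    """Stage 1 (SFT): draft a Job Description from a CV. A trivial
    template stands in for the supervised fine-tuned LLM generator."""
    skills = [w for w in cv.lower().split() if w not in {"and", "with", "in"}]
    return "Seeking a candidate skilled in " + " ".join(skills[:3]) + "."

def reward_model(cv: str, jd: str) -> float:
    """Stage 2: score the CV-JD matching degree in [0, 1]. Token overlap
    stands in for the learned reward model trained on recruiter feedback."""
    cv_tokens = {w.strip(".,") for w in cv.lower().split()}
    jd_tokens = {w.strip(".,") for w in jd.lower().split()}
    return len(cv_tokens & jd_tokens) / max(len(cv_tokens), 1)

def ppo_clip_ratio(old_score: float, new_score: float, clip: float = 0.2) -> float:
    """Stage 3 (PPO): clip the policy-update ratio so one noisy reward
    cannot move the generator too far in a single update step."""
    ratio = new_score / max(old_score, 1e-8)
    return max(min(ratio, 1.0 + clip), 1.0 - clip)

cv = "Python machine learning engineer"
jd = sft_generate_jd(cv)       # stage 1: generated job description
score = reward_model(cv, jd)   # stage 2: proxy for recruiter feedback
```

The PPO clipping step is what "aligns the generator with recruiter feedback": the reward signal nudges the generator, but only within a bounded ratio per step.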
Related papers
- STAR: A Simple Training-free Approach for Recommendations using Large Language Models [36.18841135511487]
Recent progress in large language models (LLMs) offers promising new approaches for recommendation system (RecSys) tasks.
We propose a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning.
Our method achieves Hits@10 performance of +23.8% on Beauty, +37.5% on Toys and Games, and -1.8% on Sports and Outdoors.
arXiv Detail & Related papers (2024-10-21T19:34:40Z)
- Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance [95.03771007780976]
We tackle the challenge of developing proactive agents capable of anticipating and initiating tasks without explicit human instructions.
First, we collect real-world human activities to generate proactive task predictions.
These predictions are labeled by human annotators as either accepted or rejected.
The labeled data is used to train a reward model that simulates human judgment.
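The accept/reject labeling step above is a form of binary reward modeling. A minimal sketch of the idea (the feature encoding and perceptron learner are illustrative assumptions, not the paper's method):

```python
# Sketch of training a reward model from human accept/reject labels.
# A tiny perceptron over hand-made features stands in for a learned judge;
# this is an assumption for illustration, not the paper's implementation.

def train_reward_model(examples, epochs=50, lr=0.1):
    """examples: list of (feature_vector, label), label 1=accepted, 0=rejected."""
    dim = len(examples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - (1.0 if score > 0 else 0.0)  # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def reward(w, b, x):
    """Simulated human judgment: 1.0 for tasks the model predicts as accepted."""
    return 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
```

Once trained, `reward` can score new proactive task predictions without further human annotation, which is the role the summary above assigns to the reward model.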
arXiv Detail & Related papers (2024-10-16T08:24:09Z)
- Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting [51.54907796704785]
Existing methods rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them.
Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates.
We propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol.
arXiv Detail & Related papers (2024-05-28T12:23:16Z)
- JobFormer: Skill-Aware Job Recommendation with Semantic-Enhanced Transformer [36.695509840067906]
Job recommendation aims to provide potential talents with suitable job descriptions consistent with their career trajectory.
In real-world management scenarios, the available JD-user records always consist of JDs, user profiles, and click data.
We propose a novel skill-aware recommendation model based on the designed semantic-enhanced transformer to parse JDs and complete personalized job recommendation.
arXiv Detail & Related papers (2024-04-05T12:25:00Z)
- RecMind: Large Language Model Powered Agent For Recommendation [16.710558148184205]
RecMind is an autonomous recommender agent with careful planning for zero-shot personalized recommendations.
Our experiment shows that RecMind outperforms existing zero/few-shot LLM-based recommendation baseline methods in various tasks.
arXiv Detail & Related papers (2023-08-28T04:31:04Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- GenRec: Large Language Model for Generative Recommendation [41.22833600362077]
This paper presents an innovative approach to recommendation systems using large language models (LLMs) based on text data.
GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations.
Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommendation systems.
arXiv Detail & Related papers (2023-07-02T02:37:07Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
arXiv Detail & Related papers (2023-05-15T17:57:39Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.