ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement
- URL: http://arxiv.org/abs/2402.06221v2
- Date: Wed, 8 May 2024 03:09:10 GMT
- Title: ResumeFlow: An LLM-facilitated Pipeline for Personalized Resume Generation and Refinement
- Authors: Saurabh Bhausaheb Zinjad, Amrita Bhattacharjee, Amey Bhilegaonkar, Huan Liu
- Abstract summary: We propose ResumeFlow: a Large Language Model (LLM)-aided tool that enables an end user to simply provide their detailed resume and the desired job posting, and obtain a personalized resume tailored to that posting.
Our proposed pipeline leverages the language understanding and information extraction capabilities of state-of-the-art LLMs such as OpenAI's GPT-4 and Google's Gemini.
Our easy-to-use tool leverages the user-chosen LLM in a completely off-the-shelf manner, thus requiring no fine-tuning.
- Score: 14.044324268372847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crafting the ideal, job-specific resume is a challenging task for many job applicants, especially early-career ones. While applicants are strongly advised to tailor their resume to the specific role they are applying for, manually tailoring resumes to job descriptions and role-specific requirements is often (1) extremely time-consuming, and (2) prone to human error. Furthermore, performing such tailoring at scale while applying to several roles may degrade the quality of the edited resumes. To tackle this problem, in this demo paper we propose ResumeFlow: a Large Language Model (LLM)-aided tool that enables an end user to simply provide their detailed resume and the desired job posting, and obtain a personalized resume tailored to that specific posting in a matter of seconds. Our proposed pipeline leverages the language understanding and information extraction capabilities of state-of-the-art LLMs such as OpenAI's GPT-4 and Google's Gemini to (1) extract details from the job description, (2) extract role-specific details from the user-provided resume, and then (3) use these to refine and generate a role-specific resume for the user. Our easy-to-use tool leverages the user-chosen LLM in a completely off-the-shelf manner, requiring no fine-tuning. We demonstrate the effectiveness of our tool via a video demo and propose novel task-specific evaluation metrics to control for alignment and hallucination. Our tool is available at https://job-aligned-resume.streamlit.app.
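The three extract-then-refine steps described in the abstract can be sketched as below. This is a minimal illustration, not ResumeFlow's actual implementation: the prompts are invented for this sketch, and the LLM is abstracted as a plain callable (prompt in, text out) so any off-the-shelf model such as GPT-4 or Gemini could be plugged in behind it.

```python
# Hypothetical sketch of a ResumeFlow-style pipeline. The `llm` argument is
# any callable mapping a prompt string to a completion string.

def extract_job_details(llm, job_posting: str) -> str:
    """Step 1: pull role-specific requirements out of the job description."""
    return llm(f"List the key skills and requirements in this job posting:\n{job_posting}")

def extract_resume_details(llm, resume: str) -> str:
    """Step 2: pull role-relevant details out of the user's resume."""
    return llm(f"List the skills, projects, and experience in this resume:\n{resume}")

def tailor_resume(llm, job_details: str, resume_details: str) -> str:
    """Step 3: combine both extractions into a role-specific resume."""
    return llm(
        "Rewrite the resume details to emphasize the listed job requirements, "
        "without inventing new facts.\n"
        f"Requirements:\n{job_details}\nResume details:\n{resume_details}"
    )

def resumeflow_pipeline(llm, resume: str, job_posting: str) -> str:
    job_details = extract_job_details(llm, job_posting)
    resume_details = extract_resume_details(llm, resume)
    return tailor_resume(llm, job_details, resume_details)

# Demo with a stub "LLM" that echoes a prefix of its prompt, to show data flow.
if __name__ == "__main__":
    stub = lambda prompt: "[model output for: " + prompt[:40] + "...]"
    print(resumeflow_pipeline(stub, "Python developer, 3 yrs", "Seeking ML engineer"))
```

Passing the model in as a callable mirrors the "completely off-the-shelf" claim: no step depends on fine-tuned weights, only on prompting whichever model the user selects.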
Related papers
- ConFit: Improving Resume-Job Matching using Data Augmentation and
Contrastive Learning [20.599962663046007]
We tackle the sparsity problem using data augmentations and a simple contrastive learning approach.
ConFit first creates an augmented resume-job dataset by paraphrasing specific sections in a resume or a job post.
We evaluate ConFit on two real-world datasets and find that it outperforms prior methods by up to 31% absolute in nDCG@10 for ranking jobs and ranking resumes.
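For reference, nDCG@10 (the metric ConFit reports) rewards placing relevant items near the top of a ranking, discounting gains logarithmically by position. A minimal implementation, assuming binary relevance labels for illustration:

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k ranked items."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """DCG normalized by the ideal (best possible) ordering, in [0, 1]."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# A ranking whose only relevant item sits at position 3 instead of 1:
print(ndcg_at_k([0, 0, 1, 0]))  # 1/log2(4) = 0.5
```

An "absolute" improvement in this metric means a raw difference in these [0, 1] scores, not a relative percentage gain.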
arXiv Detail & Related papers (2024-01-29T17:55:18Z) - Distilling Large Language Models using Skill-Occupation Graph Context
for HR-Related Tasks [8.235367170516769]
We introduce the Resume-Job Description Benchmark (RJDB) to cater to a wide array of HR tasks.
Our benchmark includes over 50 thousand triples of job descriptions, matched resumes and unmatched resumes.
Our experiments reveal that the student models achieve performance near or better than that of the teacher model (GPT-4), affirming the effectiveness of the benchmark.
arXiv Detail & Related papers (2023-11-10T20:25:42Z) - Eliciting Human Preferences with Language Models [56.68637202313052]
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
We propose to use *LMs themselves* to guide the task specification process.
We study GATE in three domains: email validation, content recommendation, and moral reasoning.
arXiv Detail & Related papers (2023-10-17T21:11:21Z) - JobRecoGPT -- Explainable job recommendations using LLMs [1.6317061277457001]
Large Language Models (LLMs) have taken the AI field by storm with extraordinary performance in fields where text-based data is available.
In this study, we compare performance of four different approaches for job recommendations.
arXiv Detail & Related papers (2023-09-21T06:25:28Z) - Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A
Preliminary Study on Writing Assistance [60.40541387785977]
Small foundational models can display remarkable proficiency in tackling diverse tasks when fine-tuned using instruction-driven data.
In this work, we investigate a practical problem setting where the primary focus is on one or a few particular tasks rather than general-purpose instruction following.
Experimental results show that fine-tuning LLaMA on writing instruction data significantly improves its ability on writing tasks.
arXiv Detail & Related papers (2023-05-22T16:56:44Z) - Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
arXiv Detail & Related papers (2023-05-15T17:57:39Z) - JobHam-place with smart recommend job options and candidate filtering
options [0.0]
Job recommendation and CV ranking start with automatic keyword extraction and end with the Job/CV ranking algorithm.
Job2Skill consists of two components, a text encoder and GRU-based layers, while CV2Skill is mainly based on BERT.
Job/CV ranking algorithms compute the occurrence ratio of skill words based on TF-IDF scores, together with a match ratio over the total number of skills.
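The two ranking signals described (TF-IDF-weighted occurrence of skill words, and a match ratio over the required skills) might look roughly like the sketch below; the function names, IDF values, and exact formulas here are illustrative assumptions, not the paper's.

```python
from collections import Counter

def skill_match_ratio(cv_skills, job_skills):
    """Fraction of the job's required skills that appear in the CV."""
    job = set(job_skills)
    return len(job & set(cv_skills)) / len(job) if job else 0.0

def tfidf_occurrence_score(cv_terms, job_skills, idf):
    """Occurrence of job skill words in the CV, weighted by precomputed IDF."""
    tf = Counter(cv_terms)
    return sum(tf[skill] * idf.get(skill, 0.0) for skill in set(job_skills))

cv = ["python", "sql", "python", "docker"]
job = ["python", "sql", "spark", "airflow"]
idf = {"python": 1.2, "sql": 0.8, "spark": 2.1, "airflow": 2.5}  # made-up weights

print(skill_match_ratio(cv, job))                         # 2/4 = 0.5
print(round(tfidf_occurrence_score(cv, job, idf), 2))     # 2*1.2 + 1*0.8 = 3.2
```

The two scores capture complementary signals: the match ratio ignores term frequency and rarity, while the TF-IDF score rewards repeated mentions of distinctive skills.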
arXiv Detail & Related papers (2023-03-31T09:54:47Z) - AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators [98.11286353828525]
GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks.
We propose AnnoLLM, which adopts a two-step approach, explain-then-annotate.
We build the first conversation-based information retrieval dataset employing AnnoLLM.
arXiv Detail & Related papers (2023-03-29T17:03:21Z) - AdaPrompt: Adaptive Model Training for Prompt-based NLP [77.12071707955889]
We propose AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs.
Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings.
In zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.
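"Relative error reduction" measures what fraction of the baseline's error is eliminated, not a raw accuracy gain. A worked example with a hypothetical 40% baseline error rate (the actual AdaPrompt baselines are not given here):

```python
def relative_error_reduction(baseline_err, new_err):
    """Fraction of the baseline's error eliminated by the new method."""
    return (baseline_err - new_err) / baseline_err

# Illustrative: a 26.35% relative reduction from a hypothetical 40% baseline
# error would bring the error down to 40% * (1 - 0.2635) = 29.46%.
baseline = 0.40
new = baseline * (1 - 0.2635)
print(round(relative_error_reduction(baseline, new), 4))  # 0.2635
```

The same 26.35% relative figure thus corresponds to different absolute gains depending on how high the baseline error was.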
arXiv Detail & Related papers (2022-02-10T04:04:57Z) - Annotation Curricula to Implicitly Train Non-Expert Annotators [56.67768938052715]
Voluntary studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain.
This can be overwhelming at first, mentally taxing, and can introduce errors into the resulting annotations.
We propose annotation curricula, a novel approach to implicitly train annotators.
arXiv Detail & Related papers (2021-06-04T09:48:28Z) - Learning Effective Representations for Person-Job Fit by Feature Fusion [4.884826427985207]
Person-job fit aims to match candidates with job posts on online recruitment platforms using machine learning algorithms.
In this paper, we propose to learn comprehensive and effective representations of the candidates and job posts via feature fusion.
Experiments over 10 months of real data show that our solution outperforms existing methods by a large margin.
arXiv Detail & Related papers (2020-06-12T09:02:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.