JobRecoGPT -- Explainable job recommendations using LLMs
- URL: http://arxiv.org/abs/2309.11805v1
- Date: Thu, 21 Sep 2023 06:25:28 GMT
- Title: JobRecoGPT -- Explainable job recommendations using LLMs
- Authors: Preetam Ghosh, Vaishali Sadaphal
- Abstract summary: Large Language Models (LLMs) have taken the AI field by storm with extraordinary performance in domains where text-based data is available.
In this study, we compare the performance of four different approaches to job recommendation.
- Score: 1.6317061277457001
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In today's rapidly evolving job market, finding the right opportunity can be
a daunting challenge. With advancements in the field of AI, computers can now
recommend suitable jobs to candidates. However, recommending jobs is not the
same as recommending movies to viewers. Apart from must-have criteria such as
skills and experience, there are many subtle aspects of a job that can decide
whether it is a good fit for a given candidate. Traditional approaches
can capture the quantifiable aspects of jobs and candidates, but a substantial
portion of the data that is present in unstructured form in the job
descriptions and resumes is lost in the process of conversion to structured
format. Of late, Large Language Models (LLMs) have taken the AI field by storm
with extraordinary performance in domains where text-based data is available.
Inspired by the superior performance of LLMs, we leverage their
capability to understand natural language for capturing the information that
was previously getting lost during the conversion of unstructured data to
structured form. To this end, we compare the performance of four different
approaches to job recommendation, namely (i) Content-based deterministic,
(ii) LLM-guided, (iii) LLM-unguided, and (iv) Hybrid. In this study, we present
the advantages and limitations of each method and evaluate their performance in
terms of time requirements.
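As a rough illustration of the LLM-guided approach named above, the sketch below prompts a chat model with a raw resume and job description and asks for a fit score plus a short, evidence-based explanation. This is a minimal sketch, not the authors' implementation: the prompt wording, the 0-10 scale, the model name, and the use of the OpenAI chat API are illustrative assumptions.

```python
# Minimal sketch of an LLM-guided job-candidate matcher. Everything here
# (prompt wording, 0-10 scale, model name) is an illustrative assumption,
# not the implementation evaluated in the paper.
import re

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = """You are a recruiting assistant.
Rate how well the candidate fits the job on a scale of 0 (no fit) to 10
(perfect fit), citing concrete evidence from both texts.

Resume:
{resume}

Job description:
{job}

Answer exactly as: "Score: <0-10>. Explanation: <one or two sentences>."
"""


def score_fit(resume: str, job: str, model: str = "gpt-4o-mini") -> tuple[float, str]:
    """Return (numeric fit score, full model answer) for one resume-job pair."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(resume=resume, job=job)}],
        temperature=0,
    )
    answer = response.choices[0].message.content
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)", answer)
    return (float(match.group(1)) if match else 0.0), answer


# Usage: rank jobs for one candidate by the explained fit score.
# ranked = sorted(jobs.items(), key=lambda kv: score_fit(resume, kv[1])[0], reverse=True)
```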
Related papers
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- A Survey on Prompting Techniques in LLMs [0.0]
Autoregressive Large Language Models have transformed the landscape of Natural Language Processing.
We present a taxonomy of existing literature on prompting techniques and provide a concise survey based on this taxonomy.
We identify some open problems in the realm of prompting in autoregressive LLMs which could serve as a direction for future research.
arXiv Detail & Related papers (2023-11-28T17:56:34Z)
- LLM4Jobs: Unsupervised occupation extraction and standardization leveraging Large Language Models [14.847441358093866]
This paper introduces LLM4Jobs, a novel unsupervised methodology that taps into the capabilities of large language models (LLMs) for occupation coding.
Through rigorous experimentation on synthetic and real-world datasets, we demonstrate that LLM4Jobs consistently surpasses unsupervised state-of-the-art benchmarks.
arXiv Detail & Related papers (2023-09-18T12:22:00Z)
- How Can Recommender Systems Benefit from Large Language Models: A Survey [82.06729592294322]
Large language models (LLMs) have shown impressive general intelligence and human-like capabilities.
We conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
arXiv Detail & Related papers (2023-06-09T11:31:50Z)
- OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning [49.38867353135258]
We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs.
Our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without causing significant detriment to task performance.
arXiv Detail & Related papers (2023-05-24T10:08:04Z)
- Editing Large Language Models: Problems, Methods, and Opportunities [51.903537096207]
This paper embarks on a deep exploration of the problems, methods, and opportunities related to model editing for LLMs.
We provide an exhaustive overview of the task definition and challenges associated with model editing, along with an in-depth empirical analysis of the most progressive methods currently at our disposal.
Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions on the selection of the most appropriate method for a specific task or context.
arXiv Detail & Related papers (2023-05-22T16:00:00Z)
- Document-Level Machine Translation with Large Language Models [91.03359121149595]
Large language models (LLMs) can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.
This paper provides an in-depth evaluation of LLMs' ability on discourse modeling.
arXiv Detail & Related papers (2023-04-05T03:49:06Z)
- "FIJO": a French Insurance Soft Skill Detection Dataset [0.0]
This article proposes a new public dataset, FIJO, containing insurance job offers, including many soft skill annotations.
We present the results of skill detection algorithms using a named entity recognition approach and show that transformer-based models achieve good token-wise performance on this dataset.
arXiv Detail & Related papers (2022-04-11T15:54:22Z)
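For the NER-style skill detection described in the FIJO entry above, a token-classification pipeline is the usual recipe. The sketch below uses Hugging Face transformers with a hypothetical fine-tuned checkpoint name; it illustrates the general setup only and is not a model or script released with FIJO.

```python
# Generic token-classification sketch for soft-skill detection in job offers.
# "my-org/softskill-ner-fr" is a hypothetical checkpoint name used purely for
# illustration; no such model is released with the FIJO paper.
from transformers import pipeline

skill_tagger = pipeline(
    "token-classification",
    model="my-org/softskill-ner-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)

offer = (
    "Nous recherchons un conseiller capable de travailler en équipe "
    "et de communiquer clairement avec les clients."
)

for span in skill_tagger(offer):
    # Each span carries the predicted label, surface text, and confidence.
    print(span["entity_group"], span["word"], round(float(span["score"]), 3))
```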
This list is automatically generated from the titles and abstracts of the papers on this site.