Predicting Job Titles from Job Descriptions with Multi-label Text
Classification
- URL: http://arxiv.org/abs/2112.11052v1
- Date: Tue, 21 Dec 2021 09:31:03 GMT
- Title: Predicting Job Titles from Job Descriptions with Multi-label Text
Classification
- Authors: Hieu Trung Tran, Hanh Hong Phuc Vo, Son T. Luu
- Abstract summary: We propose a multi-label classification approach for predicting relevant job titles from job description texts.
We apply the Bi-GRU-LSTM-CNN model with different pre-trained language models to the job title prediction problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Finding a suitable job and hunting for eligible candidates are important to
job seekers and human resource agencies. Given the vast amount of information in job
descriptions, employees and employers need assistance to automatically detect
job titles from job description texts. In this paper, we propose a
multi-label classification approach for predicting relevant job titles from job
description texts, and apply the Bi-GRU-LSTM-CNN model with different pre-trained
language models to the job title prediction problem. The multilingual
pre-trained BERT model obtains the highest F1-scores on both the
development and test sets: 62.20% on the development set and 47.44%
on the test set.
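The abstract reports F1-scores for a multi-label setup but does not spell out the decision rule or the averaging scheme. A minimal sketch of a common setup (an assumption, not taken from the paper): threshold an independent sigmoid per candidate job title, so one description can receive several titles, and score predictions with micro-averaged F1. The function names and the 0.5 threshold are illustrative.

```python
import math

def predict_labels(logits, threshold=0.5):
    """Multi-label decision rule (assumed, not from the paper): score each
    candidate job title independently with a sigmoid and keep every title
    whose probability clears the threshold."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return {i for i, p in enumerate(probs) if p >= threshold}

def micro_f1(gold, pred):
    """Micro-averaged F1 over a corpus: pool true positives, false
    positives, and false negatives across all examples before computing
    precision, recall, and F1."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, `predict_labels([2.0, -1.0, 0.3])` keeps titles 0 and 2, since only those logits map to sigmoid probabilities at or above 0.5.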
Related papers
- TAROT: A Hierarchical Framework with Multitask Co-Pretraining on
Semi-Structured Data towards Effective Person-Job Fit [60.31175803899285]
We propose TAROT, a hierarchical multitask co-pretraining framework, to better utilize structural and semantic information for informative text embeddings.
TAROT targets semi-structured text in profiles and jobs, and it is co-pretrained with multi-grained pretraining tasks to constrain the acquired semantic information at each level.
arXiv Detail & Related papers (2024-01-15T07:57:58Z)
- Hierarchical Classification of Transversal Skills in Job Ads Based on Sentence Embeddings [0.0]
This paper aims to identify correlations between job ad requirements and skill sets using a deep learning model.
The approach involves data collection, preprocessing, and labeling using ESCO (European Skills, Competences, and Occupations) taxonomy.
arXiv Detail & Related papers (2024-01-10T11:07:32Z)
- Unify word-level and span-level tasks: NJUNLP's Participation for the WMT2023 Quality Estimation Shared Task [59.46906545506715]
We introduce the NJUNLP team to the WMT 2023 Quality Estimation (QE) shared task.
Our team submitted predictions for the English-German language pair on both sub-tasks.
Our models achieved the best results in English-German for both word-level and fine-grained error span detection sub-tasks.
arXiv Detail & Related papers (2023-09-23T01:52:14Z)
- VacancySBERT: the approach for representation of titles and skills for semantic similarity search in the recruitment domain [0.0]
The paper focuses on deep learning semantic search algorithms applied in the HR domain.
The aim of the article is to develop a novel approach to training a Siamese network that links the skills mentioned in a job ad with its title.
arXiv Detail & Related papers (2023-07-31T13:21:15Z)
- Learning Job Titles Similarity from Noisy Skill Labels [0.11498015270151059]
Measuring semantic similarity between job titles is an essential functionality for automatic job recommendations.
In this paper, we propose an unsupervised representation learning method for training a job title similarity model using noisy skill labels.
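The summary does not describe the training method itself. As an illustration of the weak signal that noisy skill labels carry, one can compare two job titles by the overlap of the skill sets attached to them; this Jaccard-style helper is hypothetical and is not the paper's model.

```python
def skill_overlap(skills_a, skills_b):
    """Jaccard overlap between the (noisy) skill sets attached to two job
    titles -- a weak, label-derived similarity signal. Hypothetical helper
    for illustration, not the method from the paper."""
    a, b = set(skills_a), set(skills_b)
    if not (a | b):  # both empty: no evidence either way
        return 0.0
    return len(a & b) / len(a | b)
```

For instance, two titles tagged `{"python", "sql", "etl"}` and `{"python", "sql", "spark"}` share two of four distinct skills, giving an overlap of 0.5; such noisy scores can serve as training targets for a representation model.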
arXiv Detail & Related papers (2022-07-01T15:30:10Z)
- Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation [80.16548523140025]
We extend the vanilla pretrain-finetune pipeline with an extra code-switching restore task to bridge the gap between the pretraining and finetuning stages.
Our approach could narrow the cross-lingual sentence representation distance and improve low-frequency word translation with trivial computational cost.
arXiv Detail & Related papers (2022-04-16T16:08:38Z)
- Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning [54.66399120084227]
Recent state-of-the-art neural text matching models based on pre-trained language models (PLMs) are hard to generalize to different tasks.
We adopt a specialization-generalization training strategy and refer to it as Match-Prompt.
In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens.
In the generalization stage, the text matching model explores the essential matching signals by being trained on diverse matching tasks.
arXiv Detail & Related papers (2022-04-06T11:01:08Z)
- JobBERT: Understanding Job Titles through Skills [12.569546741576515]
Job titles form a cornerstone of today's human resources (HR) processes.
Job titles are a compact, convenient, and readily available data source.
We propose a neural representation model for job titles, by augmenting a pre-trained language model with co-occurrence information from skill labels extracted from vacancies.
arXiv Detail & Related papers (2021-09-20T15:00:10Z)
- Job2Vec: Job Title Benchmarking with Collective Multi-View Representation Learning [51.34011135329063]
Job Title Benchmarking (JTB) aims at matching job titles with similar expertise levels across various companies.
Traditional JTB approaches mainly rely on manual market surveys, which are expensive and labor-intensive.
We reformulate JTB as a link prediction task over a Job-Graph in which matched job titles should be linked.
arXiv Detail & Related papers (2020-09-16T02:33:32Z)
- Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection [55.445023584632175]
We build an offensive language detection system, which combines multi-task learning with BERT-based models.
Our model achieves a 91.51% F1 score on the English Sub-task A, which is comparable to the first-place result.
arXiv Detail & Related papers (2020-04-28T11:27:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.