JAMES: Normalizing Job Titles with Multi-Aspect Graph Embeddings and
Reasoning
- URL: http://arxiv.org/abs/2202.10739v2
- Date: Mon, 23 Oct 2023 21:30:42 GMT
- Title: JAMES: Normalizing Job Titles with Multi-Aspect Graph Embeddings and
Reasoning
- Authors: Michiharu Yamashita, Jia Tracy Shen, Thanh Tran, Hamoon Ekhtiari,
Dongwon Lee
- Abstract summary: In online job marketplaces, it is important to establish a well-defined job title taxonomy for various downstream tasks.
Job Title Normalization (JTN) is such a cleaning step to classify user-created non-standard job titles into normalized ones.
However, solving the JTN problem is non-trivial, with three challenges: (1) semantic similarity of different job titles, (2) non-normalized user-created job titles, and (3) large-scale and long-tailed job titles.
- Score: 5.5324736938802435
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In online job marketplaces, it is important to establish a well-defined job
title taxonomy for various downstream tasks (e.g., job recommendation, users'
career analysis, and turnover prediction). Job Title Normalization (JTN) is
such a cleaning step to classify user-created non-standard job titles into
normalized ones. However, solving the JTN problem is non-trivial, with three
challenges: (1) semantic similarity of different job titles, (2) non-normalized
user-created job titles, and (3) large-scale and long-tailed job titles in
real-world applications. To this end, we propose a novel solution, named JAMES,
that constructs three unique embeddings (i.e., graph, contextual, and
syntactic) of a target job title to effectively capture its various traits. We
further propose a multi-aspect co-attention mechanism to attentively combine
these embeddings, and employ neural logical reasoning representations to
collaboratively estimate similarities between messy job titles and normalized
job titles in a reasoning space. To evaluate JAMES, we conduct comprehensive
experiments against ten competing models on a large-scale real-world dataset
with over 350,000 job titles. Our experimental results show that JAMES
significantly outperforms the best baseline by 10.06% in Precision@10 and by
17.52% in NDCG@10.
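To make the fusion step concrete, here is a minimal sketch, assuming specific dimensions and a simple learned attention over the three aspects; the module names, sizes, and cosine scoring are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAspectCoAttention(nn.Module):
    """Attentively fuse graph, contextual, and syntactic title embeddings.

    Hypothetical sketch: JAMES's actual co-attention and neural logical
    reasoning components are more involved than this weighted sum.
    """
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one attention score per aspect

    def forward(self, g, c, s):                      # each (batch, dim)
        aspects = torch.stack([g, c, s], dim=1)      # (batch, 3, dim)
        weights = F.softmax(self.score(aspects), dim=1)
        return (weights * aspects).sum(dim=1)        # fused (batch, dim)

fuse = MultiAspectCoAttention(dim=256)
g = c = s = torch.randn(4, 256)                     # toy aspect embeddings
messy = F.normalize(fuse(g, c, s), dim=-1)
taxonomy = F.normalize(torch.randn(1000, 256), dim=-1)  # normalized titles
scores = messy @ taxonomy.T                  # cosine similarities (4, 1000)
top10 = scores.topk(10, dim=1).indices       # candidates behind Precision@10
```

The paper's reasoning-space similarity replaces the plain cosine scoring shown here; the sketch only fixes the overall data flow from aspect embeddings to a ranked list of normalized titles.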
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can succeed on classification tasks with little or even non-overlapping annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
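As a loose illustration of the shared-representation idea, here is a minimal sketch; the two face tasks, the layer sizes, and in particular the crude moment-matching penalty standing in for the paper's distribution matching are all assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU())      # shared space
heads = {"age": nn.Linear(256, 8), "expression": nn.Linear(256, 7)}

# Batches from two tasks whose label sets need not overlap.
xa, ya = torch.randn(32, 512), torch.randint(0, 8, (32,))
xb, yb = torch.randn(32, 512), torch.randint(0, 7, (32,))
za, zb = encoder(xa), encoder(xb)

ce = nn.CrossEntropyLoss()
task_loss = ce(heads["age"](za), ya) + ce(heads["expression"](zb), yb)
# Crude stand-in for distribution matching: align the first moments of the
# per-task feature distributions in the shared space.
match_loss = (za.mean(0) - zb.mean(0)).pow(2).sum()
(task_loss + 0.1 * match_loss).backward()
```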
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- VacancySBERT: the approach for representation of titles and skills for semantic similarity search in the recruitment domain [0.0]
The paper focuses on deep learning semantic search algorithms applied in the HR domain.
The article develops a novel approach to training a Siamese network that links the skills mentioned in a job ad with its title.
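A loose sketch of such a Siamese setup follows, pairing each title with skill text from the same ad and training an in-batch contrastive objective via the sentence-transformers library; the base model name, the toy pairs, and the loss choice are assumptions rather than the paper's exact configuration.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder base encoder
pairs = [  # toy (title, skills-from-the-same-ad) positives
    InputExample(texts=["Senior Java Developer", "java spring hibernate sql"]),
    InputExample(texts=["Data Analyst", "sql tableau statistics excel"]),
]
loader = DataLoader(pairs, batch_size=2, shuffle=True)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)

# After training, titles and skill strings share one embedding space,
# so nearest-neighbor search can link them for semantic retrieval.
emb = model.encode(["Senior Java Developer", "java spring hibernate sql"])
```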
arXiv Detail & Related papers (2023-07-31T13:21:15Z)
- AIMS: All-Inclusive Multi-Level Segmentation [93.5041381700744]
We propose a new task, All-Inclusive Multi-Level Segmentation (AIMS), which segments visual regions into three levels: part, entity, and relation.
We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation.
arXiv Detail & Related papers (2023-05-28T16:28:49Z)
- A practical method for occupational skills detection in Vietnamese job listings [0.16114012813668932]
Lack of accurate and timely labor market information leads to skill mismatches.
Traditional approaches rely on an existing taxonomy and/or large annotated data.
We propose a practical methodology for skill detection in Vietnamese job listings.
arXiv Detail & Related papers (2022-10-26T10:23:18Z)
- Learning Job Titles Similarity from Noisy Skill Labels [0.11498015270151059]
Measuring semantic similarity between job titles is an essential functionality for automatic job recommendations.
In this paper, we propose an unsupervised representation learning method for training a job title similarity model using noisy skill labels.
arXiv Detail & Related papers (2022-07-01T15:30:10Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
On average, it finds architectures in the top 0.16% and 0.29% of two search spaces under a budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- Predicting Job Titles from Job Descriptions with Multi-label Text Classification [0.0]
We propose a multi-label classification approach for predicting relevant job titles from job description texts.
We implement a Bi-GRU-LSTM-CNN with different pre-trained language models for the job title prediction problem.
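The rough shape of such an encoder with a multi-label head is sketched below; layer sizes, the pooling choice, and the label count are assumptions, and the pre-trained language models the paper plugs in are omitted here.

```python
import torch
import torch.nn as nn

class BiGruLstmCnn(nn.Module):
    """Toy Bi-GRU -> LSTM -> CNN encoder with a multi-label title head."""
    def __init__(self, vocab=30000, dim=128, n_titles=500):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.lstm = nn.LSTM(2 * dim, dim, batch_first=True)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.head = nn.Linear(dim, n_titles)  # one logit per job title

    def forward(self, tokens):                        # (batch, seq)
        x, _ = self.gru(self.emb(tokens))             # (batch, seq, 2*dim)
        x, _ = self.lstm(x)                           # (batch, seq, dim)
        x = self.conv(x.transpose(1, 2)).amax(dim=2)  # max-pool over time
        return self.head(x)                           # multi-label logits

model = BiGruLstmCnn()
logits = model(torch.randint(0, 30000, (8, 256)))   # toy description tokens
targets = torch.zeros_like(logits)                  # multi-hot title labels
loss = nn.BCEWithLogitsLoss()(logits, targets)      # one loss, many labels
```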
arXiv Detail & Related papers (2021-12-21T09:31:03Z)
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- Job2Vec: Job Title Benchmarking with Collective Multi-View Representation Learning [51.34011135329063]
Job Title Benchmarking (JTB) aims at matching job titles with similar expertise levels across various companies.
Traditional JTB approaches mainly rely on manual market surveys, which are expensive and labor-intensive.
We reformulate JTB as link prediction over a Job-Graph in which matched job titles should be linked.
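Stripped to its core, that reformulation can look like the sketch below, where learned title embeddings score candidate edges; this illustrates link prediction only and omits Job2Vec's collective multi-view representation learning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TitleLinkPredictor(nn.Module):
    """Score a candidate edge between two titles by embedding dot product."""
    def __init__(self, n_titles: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(n_titles, dim)

    def forward(self, src, dst):                        # index tensors
        return (self.emb(src) * self.emb(dst)).sum(-1)  # edge logits

model = TitleLinkPredictor(n_titles=10_000)
pos = model(torch.tensor([1, 2]), torch.tensor([3, 4]))  # observed edges
neg = model(torch.tensor([1, 2]),
            torch.randint(0, 10_000, (2,)))             # sampled non-edges
loss = F.binary_cross_entropy_with_logits(
    torch.cat([pos, neg]), torch.tensor([1.0, 1.0, 0.0, 0.0]))
```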
arXiv Detail & Related papers (2020-09-16T02:33:32Z)
- Language Models are Few-Shot Learners [61.36677350504291]
We show that scaling up language models greatly improves task-agnostic, few-shot performance.
We train GPT-3, an autoregressive language model with 175 billion parameters, and test its performance in the few-shot setting.
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks.
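Few-shot here means the demonstrations live entirely in the prompt, with no gradient updates; the toy prompt below (using title normalization to tie back to JTN) is an illustration, not a template from either paper.

```python
# Hypothetical few-shot prompt for an autoregressive LM completion endpoint.
examples = [
    ("sr. java dev", "Senior Java Developer"),
    ("mktg mgr", "Marketing Manager"),
]
query = "full stack engr"

prompt = "Normalize each job title.\n\n"
prompt += "".join(f"Raw: {raw}\nNormalized: {norm}\n\n"
                  for raw, norm in examples)
prompt += f"Raw: {query}\nNormalized:"
print(prompt)  # the model should complete with e.g. "Full Stack Engineer"
```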
arXiv Detail & Related papers (2020-05-28T17:29:03Z)