Skill Extraction from Job Postings using Weak Supervision
- URL: http://arxiv.org/abs/2209.08071v1
- Date: Fri, 16 Sep 2022 17:35:19 GMT
- Title: Skill Extraction from Job Postings using Weak Supervision
- Authors: Mike Zhang, Kristian Nørgaard Jensen, Rob van der Goot, Barbara Plank
- Abstract summary: We propose Skill Extraction with Weak Supervision.
We leverage the European Skills, Competences, Qualifications and Occupations taxonomy to find similar skills in job ads via latent representations.
The method shows a strong positive signal, outperforming baselines based on token-level and syntactic patterns.
- Score: 23.174550050715954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aggregated data obtained from job postings provide powerful insights into
labor market demands and emerging skills, and aid job matching. However, most
extraction approaches are supervised and thus need costly and time-consuming
annotation. To overcome this, we propose Skill Extraction with Weak
Supervision. We leverage the European Skills, Competences, Qualifications and
Occupations taxonomy to find similar skills in job ads via latent
representations. The method shows a strong positive signal, outperforming
baselines based on token-level and syntactic patterns.
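The core weak-labeling step described above can be sketched as follows. This is a minimal, illustrative stand-in: the paper uses latent representations from a pretrained encoder and the full ESCO taxonomy, whereas here we use bag-of-words vectors and a tiny hypothetical skill list purely to show the mechanics of matching job-ad spans to the closest skill.

```python
from collections import Counter
from math import sqrt

# Toy stand-in for the paper's latent representations: the actual method
# embeds spans with a pretrained encoder; bag-of-words vectors are used
# here only to illustrate the weak-labeling step.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical ESCO-style skill labels (illustrative, not the real taxonomy dump).
esco_skills = ["manage projects", "python programming", "customer service"]

def weak_label(span, threshold=0.5):
    """Assign the closest skill label to a job-ad span, or None below threshold."""
    best = max(esco_skills, key=lambda s: cosine(embed(span), embed(s)))
    return best if cosine(embed(span), embed(best)) >= threshold else None

print(weak_label("experience with python programming"))  # matches a skill
print(weak_label("free coffee in the office"))           # no overlap -> None
```

In the real pipeline, spans labeled this way serve as weak supervision for training a skill-extraction model, avoiding manual annotation.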
Related papers
- Computational Job Market Analysis with Natural Language Processing [5.117211717291377]
This thesis investigates Natural Language Processing (NLP) technology for extracting relevant information from job descriptions.
We frame the problem, obtain annotated data, and introduce extraction methodologies.
Our contributions include job description datasets, a de-identification dataset, and a novel active learning algorithm for efficient model training.
arXiv Detail & Related papers (2024-04-29T14:52:38Z)
- Rethinking Skill Extraction in the Job Market Domain using Large Language Models [20.256353240384133]
Skill Extraction involves identifying skills and qualifications mentioned in documents such as job postings and resumes.
The reliance on manually annotated data limits the generalizability of such approaches.
In this paper, we explore the use of in-context learning to overcome these challenges.
arXiv Detail & Related papers (2024-02-06T09:23:26Z)
- NNOSE: Nearest Neighbor Occupational Skill Extraction [55.22292957778972]
We tackle the complexity in occupational skill datasets.
We employ an external datastore for retrieving similar skills in a dataset-unifying manner.
We observe a performance gain in predicting infrequent patterns, with substantial gains of up to 30% span-F1 in cross-dataset settings.
arXiv Detail & Related papers (2024-01-30T15:18:29Z)
- Extreme Multi-Label Skill Extraction Training using Large Language Models [19.095612333241288]
We describe a cost-effective approach to generate an accurate, fully synthetic labeled dataset for skill extraction.
Our results show a consistent increase of 15 to 25 percentage points in R-Precision@5 compared to previously published results.
arXiv Detail & Related papers (2023-07-20T11:29:15Z)
- Is it Required? Ranking the Skills Required for a Job-Title [0.0]
We show that important/relevant skills appear more frequently in similar job titles.
We train a Language-agnostic BERT Sentence (LaBSE) model to predict the importance of the skills using weak supervision.
arXiv Detail & Related papers (2022-11-28T10:27:11Z)
- Skill-Based Reinforcement Learning with Intrinsic Reward Matching [77.34726150561087]
We present Intrinsic Reward Matching (IRM), which unifies task-agnostic skill pretraining and task-aware finetuning.
IRM enables us to utilize pretrained skills far more effectively than previous skill selection methods.
arXiv Detail & Related papers (2022-10-14T00:04:49Z)
- Design of Negative Sampling Strategies for Distantly Supervised Skill Extraction [19.43668931500507]
We propose an end-to-end system for skill extraction, based on distant supervision through literal matching.
We observe that using the ESCO taxonomy to select negative examples from related skills yields the biggest improvements.
We release the benchmark dataset for research purposes to stimulate further research on the task.
arXiv Detail & Related papers (2022-09-13T13:37:06Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning [120.38381203153159]
Reinforcement learning can train policies that effectively perform complex tasks.
For long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and composing lower-level skills.
We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill.
arXiv Detail & Related papers (2021-11-04T22:46:16Z)
- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)
- Reciprocal Feature Learning via Explicit and Implicit Tasks in Scene Text Recognition [60.36540008537054]
In this work, we exploit character counting, an implicit task within traditional text recognition, at no additional annotation cost.
We design a two-branch reciprocal feature learning framework to adequately utilize the features from both tasks.
Experiments on 7 benchmarks show the advantages of the proposed method in both text recognition and the newly built character counting task.
arXiv Detail & Related papers (2021-05-13T12:27:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.