SkillMatch: Evaluating Self-supervised Learning of Skill Relatedness
- URL: http://arxiv.org/abs/2410.05006v1
- Date: Mon, 7 Oct 2024 13:05:26 GMT
- Title: SkillMatch: Evaluating Self-supervised Learning of Skill Relatedness
- Authors: Jens-Joris Decorte, Jeroen Van Hautte, Thomas Demeester, Chris Develder
- Abstract summary: We release SkillMatch, a benchmark for the task of skill relatedness based on expert knowledge mining from millions of job ads.
We also propose a scalable self-supervised learning technique to adapt a Sentence-BERT model based on skill co-occurrence in job ads.
- Score: 11.083396379885478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately modeling the relationships between skills is a crucial part of human resources processes such as recruitment and employee development. Yet, no benchmarks exist to evaluate such methods directly. We construct and release SkillMatch, a benchmark for the task of skill relatedness, based on expert knowledge mining from millions of job ads. Additionally, we propose a scalable self-supervised learning technique to adapt a Sentence-BERT model based on skill co-occurrence in job ads. This new method greatly surpasses traditional models for skill relatedness as measured on SkillMatch. By releasing SkillMatch publicly, we aim to contribute a foundation for research towards increased accuracy and transparency of skill-based recommendation systems.
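The abstract describes the adaptation technique only at a high level. As a rough illustration, the sketch below shows one plausible way to adapt a Sentence-BERT model on skill co-occurrence signals with the sentence-transformers library: skills that co-occur in the same job ad are treated as positive pairs, with the other skills in the batch serving as negatives (MultipleNegativesRankingLoss). The base model, the example skill pairs, and the hyperparameters are placeholders for illustration, not the paper's actual setup.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

# Hypothetical co-occurring skill pairs mined from job ads (illustrative only).
skill_pairs = [
    ("python", "machine learning"),
    ("recruitment", "talent acquisition"),
    ("javascript", "react"),
]

# Placeholder base model; the paper adapts a Sentence-BERT model, exact checkpoint unspecified here.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Co-occurring skills become positive pairs; in-batch skills act as negatives.
train_examples = [InputExample(texts=[a, b]) for a, b in skill_pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)

# Contrastive fine-tuning on the co-occurrence pairs.
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)

# Score skill relatedness as cosine similarity between adapted skill embeddings.
emb = model.encode(["python", "pandas", "forklift operation"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1:]))
```

After adaptation, relatedness between two skills can be scored as the cosine similarity of their embeddings, which is also how such a model would typically be evaluated on a relatedness benchmark like SkillMatch.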
Related papers
- Behavior Contrastive Learning for Unsupervised Skill Discovery [75.6190748711826]
We propose a novel unsupervised skill discovery method through contrastive learning among behaviors.
Under mild assumptions, our objective maximizes the mutual information (MI) between different behaviors based on the same skill.
Our method implicitly increases the state entropy to obtain better state coverage.
arXiv Detail & Related papers (2023-05-08T06:02:11Z)
- Choreographer: Learning and Adapting Skills in Imagination [60.09911483010824]
We present Choreographer, a model-based agent that exploits its world model to learn and adapt skills in imagination.
Our method decouples the exploration and skill learning processes, being able to discover skills in the latent state space of the model.
Choreographer is able to learn skills both from offline data, and by collecting data simultaneously with an exploration policy.
arXiv Detail & Related papers (2022-11-23T23:31:14Z)
- Residual Skill Policies: Learning an Adaptable Skill-based Action Space for Reinforcement Learning for Robotics [18.546688182454236]
Skill-based reinforcement learning (RL) has emerged as a promising strategy to leverage prior knowledge for accelerated robot learning.
We propose accelerating exploration in the skill space using state-conditioned generative models.
We validate our approach across four challenging manipulation tasks, demonstrating our ability to learn across task variations.
arXiv Detail & Related papers (2022-11-04T02:42:17Z)
- Skill-Based Reinforcement Learning with Intrinsic Reward Matching [77.34726150561087]
We present Intrinsic Reward Matching (IRM), which unifies task-agnostic skill pretraining and task-aware finetuning.
IRM enables us to utilize pretrained skills far more effectively than previous skill selection methods.
arXiv Detail & Related papers (2022-10-14T00:04:49Z)
- Design of Negative Sampling Strategies for Distantly Supervised Skill Extraction [19.43668931500507]
We propose an end-to-end system for skill extraction, based on distant supervision through literal matching.
We observe that using the ESCO taxonomy to select negative examples from related skills yields the biggest improvements.
We release the benchmark dataset for research purposes to stimulate further research on the task.
arXiv Detail & Related papers (2022-09-13T13:37:06Z)
- Hierarchical Kickstarting for Skill Transfer in Reinforcement Learning [27.69559938165733]
Practising and honing skills forms a fundamental component of how humans learn, yet artificial agents are rarely specifically trained to perform them.
We investigate how skills can be incorporated into the training of reinforcement learning (RL) agents in complex environments.
Our experiments show that learning with a prior knowledge of useful skills can significantly improve the performance of agents on complex problems.
arXiv Detail & Related papers (2022-07-23T19:23:29Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of an entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)
- Discovering Generalizable Skills via Automated Generation of Diverse Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
arXiv Detail & Related papers (2021-06-26T03:41:51Z)
- Skillearn: Machine Learning Inspired by Humans' Learning Skills [15.125072827275766]
We are interested in investigating whether humans' learning skills can be borrowed to help machines learn better.
Specifically, we aim to formalize these skills and leverage them to train better machine learning (ML) models.
To achieve this goal, we develop a general framework -- Skillearn, which provides a principled way to represent humans' learning skills mathematically.
In two case studies, we apply Skillearn to formalize two learning skills of humans: learning by passing tests and interleaving learning, and use the formalized skills to improve neural architecture search.
arXiv Detail & Related papers (2020-12-09T04:56:22Z)
- Accelerating Reinforcement Learning with Learned Skill Priors [20.268358783821487]
Most modern reinforcement learning approaches learn every task from scratch.
One approach for leveraging prior knowledge is to transfer skills learned on prior tasks to the new task.
We show that learned skill priors are essential for effective skill transfer from rich datasets.
arXiv Detail & Related papers (2020-10-22T17:59:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.