Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels
- URL: http://arxiv.org/abs/2406.16293v1
- Date: Mon, 24 Jun 2024 03:36:19 GMT
- Title: Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels
- Authors: Zixia Jia, Junpeng Li, Shichuan Zhang, Anji Liu, Zilong Zheng
- Abstract summary: We propose an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning.
Experimental results across various tasks, including document-level relation extraction, demonstrate the generalization and effectiveness of our framework.
- Score: 27.53399899573121
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional supervised learning heavily relies on human-annotated datasets, especially in data-hungry neural approaches. However, various tasks, especially multi-label tasks like document-level relation extraction, pose challenges in fully manual annotation due to the specific domain knowledge and large class sets. Therefore, we address the multi-label positive-unlabelled learning (MLPUL) problem, where only a subset of positive classes is annotated. We propose Mixture Learner for Partially Annotated Classification (MLPAC), an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning. Experimental results across various tasks, including document-level relation extraction, multi-label image classification, and binary PU learning, demonstrate the generalization and effectiveness of our framework.
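To make the combination concrete, the following is a minimal sketch of the general idea, not the authors' actual algorithm: a classifier's sigmoid outputs act both as a supervised learner, trained with binary cross-entropy on the annotated positives (exploitation), and as a stochastic policy whose sampled label sets are rewarded for recovering those positives (exploration). The function name, the recall-style reward, and the `rl_weight` mixing coefficient are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mlpac_style_loss(logits, partial_positives, rl_weight=0.5):
    """Hypothetical sketch: combine an RL (exploration) term with a
    supervised BCE (exploitation) term for multi-label PU learning.

    logits:            (batch, num_classes) raw scores from any encoder
    partial_positives: (batch, num_classes) 1.0 where a positive label is
                       annotated, 0.0 where the label is unobserved (PU setting)
    """
    probs = torch.sigmoid(logits)

    # Exploitation: supervised BCE restricted to the annotated positive entries.
    bce = F.binary_cross_entropy(probs, torch.ones_like(probs), reduction="none")
    sup_loss = (bce * partial_positives).sum() / partial_positives.sum().clamp(min=1)

    # Exploration: sample a label assignment from the per-class Bernoulli
    # policy and reward recall of the known positives (REINFORCE-style,
    # no baseline, purely for illustration).
    dist = torch.distributions.Bernoulli(probs=probs)
    actions = dist.sample()
    reward = (actions * partial_positives).sum(dim=1) / \
             partial_positives.sum(dim=1).clamp(min=1)
    rl_loss = -(dist.log_prob(actions).sum(dim=1) * reward).mean()

    return sup_loss + rl_weight * rl_loss
```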
Related papers
- Multi-level Shared Knowledge Guided Learning for Knowledge Graph Completion [26.40236457109129]
We introduce a multi-level Shared Knowledge Guided learning method (SKG) that operates at both the dataset and task levels.
On the dataset level, SKG-KGC broadens the original dataset by identifying shared features within entity sets via text summarization.
On the task level, for the three typical KGC subtasks - head entity prediction, relation prediction, and tail entity prediction - we present an innovative multi-task learning architecture with dynamically adjusted loss weights.
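The summary does not specify how the loss weights are adjusted. One common realization of dynamic weighting, sketched below purely as an assumption, is the learned homoscedastic-uncertainty scheme of Kendall et al. (2018) applied to the three subtask losses.

```python
import torch
import torch.nn as nn

class DynamicTaskWeighting(nn.Module):
    """Hypothetical sketch of dynamically weighted multi-task losses for the
    three KGC subtasks (head, relation, and tail prediction), using learned
    uncertainty weights. The actual SKG-KGC weighting scheme may differ."""

    def __init__(self, num_tasks=3):
        super().__init__()
        # One learnable log-variance per task, trained jointly with the model.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # task_losses: iterable of scalar losses [L_head, L_rel, L_tail].
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            # exp(-log_var) down-weights noisy tasks; the +log_var term
            # prevents the weights from collapsing to zero.
            total = total + torch.exp(-log_var) * loss + log_var
        return total
```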
arXiv Detail & Related papers (2024-05-08T03:27:46Z) - Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can succeed on classification tasks even when their annotations overlap little or not at all.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
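As a rough illustration of distribution matching (the paper's exact objective is not given here), one task head can be trained to match the softened prediction distribution of another head on shared inputs. Everything below, including the temperature and the stop-gradient choice, is an assumption.

```python
import torch
import torch.nn.functional as F

def distribution_matching_loss(logits_a, logits_b, temperature=2.0):
    """Hypothetical sketch: encourage two task heads to agree on the
    distributional structure of their predictions for the same inputs,
    so labelled data for one task can inform the other.

    logits_a, logits_b: (batch, num_classes) outputs of two task heads.
    """
    p = F.log_softmax(logits_a / temperature, dim=-1)
    q = F.softmax(logits_b.detach() / temperature, dim=-1)
    # KL(q || p): pull head A's distribution toward head B's
    # (stop-gradient on B so knowledge flows one way per call).
    return F.kl_div(p, q, reduction="batchmean") * temperature ** 2
```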
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning [15.41342100228504]
In deep learning, auxiliary objectives are often used to facilitate learning in situations where data is scarce.
We propose a novel framework, dubbed Detaux, whereby a weakly supervised disentanglement procedure is used to discover new unrelated classification tasks.
arXiv Detail & Related papers (2023-10-13T17:40:39Z) - Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
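A minimal sketch of the first ingredient, assuming the standard one-binary-problem-per-label decomposition; the embedding-structure term is omitted and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def multilabel_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Hypothetical sketch of the 'one binary problem per label' view of
    multi-label knowledge distillation: each class is treated as an
    independent binary task and the teacher's per-label probability is
    distilled with a binary cross-entropy.

    student_logits, teacher_logits: (batch, num_labels) raw scores.
    """
    t = torch.sigmoid(teacher_logits / temperature)  # soft binary targets
    s = torch.sigmoid(student_logits / temperature)
    # Per-label BCE against the teacher's soft probabilities.
    return F.binary_cross_entropy(s, t, reduction="mean") * temperature ** 2
```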
arXiv Detail & Related papers (2023-08-12T03:19:08Z) - Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
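The framework's details are not given here; below is a generic InfoNCE-style contrastive loss of the kind such multi-task frameworks are typically built on. The function name and the in-batch-negatives assumption are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.07):
    """Hypothetical sketch of a contrastive objective for paired
    scientific-text embeddings: row i of `queries` and `keys` is a
    positive pair; all other rows in the batch act as negatives.

    queries, keys: (batch, dim) embeddings.
    """
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)    # positives on the diagonal
```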
arXiv Detail & Related papers (2023-05-23T16:47:22Z) - A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z) - Nested Collaborative Learning for Long-Tailed Visual Recognition [71.6074806468641]
NCL consists of two core components, namely Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD).
To learn representations more thoroughly, both NIL and NBOD are formulated in a nested way, in which the learning is conducted on not just all categories from a full perspective but some hard categories from a partial perspective.
In NCL, learning from the two perspectives is nested, closely related, and complementary, helping the network capture not only global, robust features but also fine-grained discriminative ability.
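A minimal sketch of the full-versus-partial-perspective idea, under the assumption that hard categories are simply each sample's top-scoring classes; NIL's individual supervision and NBOD's distillation are not modeled here.

```python
import torch
import torch.nn.functional as F

def nested_loss(logits, targets, k=64):
    """Hypothetical sketch: a standard loss over all categories (full
    perspective) plus a loss restricted to each sample's hardest,
    i.e. highest-scoring, categories (partial perspective).

    logits: (batch, num_classes); targets: (batch,) class indices; k <= num_classes.
    """
    full_loss = F.cross_entropy(logits, targets)

    # Partial perspective: keep the top-k scoring categories per sample and
    # prepend the ground-truth class so it is always included (it may then
    # appear twice; harmless for this sketch).
    topk = logits.topk(k, dim=1).indices                   # (batch, k)
    topk = torch.cat([targets.unsqueeze(1), topk], dim=1)  # (batch, k + 1)
    partial_logits = logits.gather(1, topk)
    partial_targets = torch.zeros_like(targets)            # target sits at index 0
    partial_loss = F.cross_entropy(partial_logits, partial_targets)

    return full_loss + partial_loss
```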
arXiv Detail & Related papers (2022-03-29T08:55:39Z) - Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at jointly learning multiple dense prediction tasks on partially annotated data, a setting we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We demonstrate that the proposed method effectively exploits images whose labels are missing for some tasks, and that it outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
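The basic bookkeeping of this setting can be sketched as follows, assuming per-image task masks; the task-relation supervision that is the paper's actual contribution is not shown.

```python
import torch

def partially_supervised_mtl_loss(task_losses, label_masks):
    """Hypothetical sketch: each image carries labels for only a subset of
    tasks, so each task loss is averaged over the images that actually
    have that task annotated.

    task_losses: dict task_name -> (batch,) per-image loss values
    label_masks: dict task_name -> (batch,) 1.0 if the image is labelled
                 for that task, 0.0 otherwise
    """
    total = 0.0
    for task, loss in task_losses.items():
        mask = label_masks[task]
        # Average only over the annotated images for this task.
        total = total + (loss * mask).sum() / mask.sum().clamp(min=1)
    return total
```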
arXiv Detail & Related papers (2021-11-29T19:03:12Z) - Open-Set Representation Learning through Combinatorial Embedding [62.05670732352456]
We are interested in identifying novel concepts in a dataset through representation learning based on the examples in both labeled and unlabeled classes.
We propose a learning approach, which naturally clusters examples in unseen classes using the compositional knowledge given by multiple supervised meta-classifiers on heterogeneous label spaces.
The proposed algorithm discovers novel concepts via a joint optimization that enhances the discriminativeness of unseen classes while learning representations of known classes that generalize to novel ones.
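A toy sketch of the compositional intuition, under the assumption that each meta-classifier emits one coarse decision and the tuple of decisions serves as a code for grouping unseen-class examples.

```python
import torch

def combinatorial_codes(meta_logits):
    """Hypothetical sketch: several meta-classifiers over heterogeneous
    (coarse) label spaces each assign an example a class; the tuple of
    assignments acts as a combinatorial code, so examples from unseen
    classes can be grouped by sharing a code even though no single
    classifier was trained on them.

    meta_logits: list of (batch, n_i) logits, one per meta-classifier.
    """
    # One coarse decision per meta-classifier.
    assignments = [logits.argmax(dim=1) for logits in meta_logits]
    # Stack into (batch, num_classifiers); each row is a composite code.
    return torch.stack(assignments, dim=1)
```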
arXiv Detail & Related papers (2021-06-29T11:51:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.