Multi-relation Message Passing for Multi-label Text Classification
- URL: http://arxiv.org/abs/2202.04844v1
- Date: Thu, 10 Feb 2022 05:24:37 GMT
- Title: Multi-relation Message Passing for Multi-label Text Classification
- Authors: Muberra Ozmen, Hao Zhang, Pengyun Wang, Mark Coates
- Abstract summary: We propose a novel method, entitled Multi-relation Message Passing (MrMP), for the multi-label classification problem.
Experiments on benchmark multi-label text classification datasets show that the MrMP module yields similar or superior performance compared to state-of-the-art methods.
- Score: 19.37481051508782
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A well-known challenge associated with the multi-label classification problem
is modelling dependencies between labels. Most attempts at modelling label
dependencies focus on co-occurrences, ignoring the valuable information that
can be extracted by detecting label subsets that rarely occur together. For
example, consider customer product reviews; a product probably would not
simultaneously be tagged by both "recommended" (i.e., reviewer is happy and
recommends the product) and "urgent" (i.e., the review suggests immediate
action to remedy an unsatisfactory experience). Aside from the consideration of
positive and negative dependencies, the direction of a relationship should also
be considered. For a multi-label image classification problem, the "ship" and
"sea" labels have an obvious dependency, but the presence of the former implies
the latter much more strongly than the other way around. These examples
motivate the modelling of multiple types of bi-directional relationships
between labels. In this paper, we propose a novel method, entitled
Multi-relation Message Passing (MrMP), for the multi-label classification
problem. Experiments on benchmark multi-label text classification datasets show
that the MrMP module yields similar or superior performance compared to
state-of-the-art methods. The approach imposes only minor additional
computational and memory overheads.
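The abstract describes message passing over a label graph with multiple relation types, some of them directional. A minimal, hypothetical sketch of that idea is below; the relation set (positive co-occurrence vs. mutual exclusion), the matrix names, and the R-GCN-style update rule are assumptions of this sketch, not the paper's actual equations.

```python
# Hypothetical sketch of multi-relation message passing over a label graph.
# Each relation type r has its own adjacency matrix A_r and projection W_r;
# asymmetric A_r matrices can encode directed dependencies, e.g. "ship"
# implying "sea" more strongly than the reverse. This is an illustrative
# assumption, not MrMP's published formulation.
import numpy as np

def multi_relation_step(H, adjacencies, weights):
    """One message-passing round over the label graph.

    H           : (L, d) label embeddings
    adjacencies : list of (L, L) relation-specific adjacency matrices
    weights     : list of (d, d) relation-specific projection matrices
    """
    out = np.zeros_like(H)
    for A_r, W_r in zip(adjacencies, weights):
        out += A_r @ H @ W_r  # messages aggregated per relation type
    return np.maximum(out + H, 0.0)  # residual connection + ReLU
```

The per-relation projections let a negative (exclusion) relation push embeddings apart while a positive (co-occurrence) relation pulls them together, which is what motivates modelling the relation types separately rather than with a single co-occurrence graph.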
Related papers
- Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification [85.76130799062379]

We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z)
- Complementary to Multiple Labels: A Correlation-Aware Correction Approach [65.59584909436259]
We show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases.
We propose a two-step method to estimate the transition matrix from candidate labels.
arXiv Detail & Related papers (2023-02-25T04:48:48Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) attracts increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- An Effective Approach for Multi-label Classification with Missing Labels [8.470008570115146]
We propose a pseudo-label based approach to reduce the cost of annotation without bringing additional complexity to the classification networks.
By designing a novel loss function, we are able to relax the requirement that each instance must contain at least one positive label.
We show that our method can handle the imbalance between positive labels and negative labels, while still outperforming existing missing-label learning approaches.
arXiv Detail & Related papers (2022-10-24T23:13:57Z)
- Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling [39.76268402567324]
We construct a Heterogeneous Label Graph (HLG) containing two kinds of topologies.
Label correlations are leveraged to enhance semantic-label interactions.
We also propose the label-aware inter-dependent decoding mechanism to further exploit the label correlations for decoding.
arXiv Detail & Related papers (2022-10-19T08:21:43Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, named Single-positive MultI-label learning with Label Enhancement (SMILE), is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative labels in single positive multi-label learning (SPML).
We propose entropy-maximization (EM) loss to maximize the entropy of predicted probabilities for all unannotated labels.
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
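The entropy-maximization idea above can be sketched as follows. This is an assumed minimal form, not the paper's exact loss: binary cross-entropy on the single annotated positive, plus a negative-entropy term on the unannotated labels; the weighting between the two terms (and the APL component) is omitted.

```python
# Hypothetical minimal EM-style loss for single-positive multi-label
# learning: BCE on the annotated positive label, plus a negative-entropy
# term that pushes unannotated-label predictions toward maximum entropy
# (p = 0.5). The equal 1:1 weighting of the terms is an assumption.
import numpy as np

def em_loss(probs, positive_mask):
    eps = 1e-12
    p = np.clip(probs, eps, 1.0 - eps)
    # BCE on the annotated positive label(s)
    pos_loss = -np.log(p[positive_mask]).sum()
    # Negative entropy of unannotated labels; minimizing it maximizes entropy
    q = p[~positive_mask]
    neg_entropy = (q * np.log(q) + (1.0 - q) * np.log(1.0 - q)).sum()
    return pos_loss + neg_entropy
```

Under this sketch, confident (near 0 or 1) predictions on unannotated labels are penalized relative to uncertain ones, which is the stated rationale for not simply treating all unannotated labels as negatives.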
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Evaluating Multi-label Classifiers with Noisy Labels [0.7868449549351487]
In the real world, it is more common to deal with noisy datasets than clean datasets.
We present a Context-Based Multi-Label-Classifier (CbMLC) that effectively handles noisy labels.
We show CbMLC yields substantial improvements over the previous methods in most cases.
arXiv Detail & Related papers (2021-02-16T19:50:52Z)
- Label Confusion Learning to Enhance Text Classification Models [3.0251266104313643]
Label Confusion Model (LCM) learns label confusion to capture semantic overlap among labels.
LCM can generate a better label distribution to replace the original one-hot label vector.
Experiments on five text classification benchmark datasets reveal the effectiveness of LCM for several widely used deep learning classification models.
arXiv Detail & Related papers (2020-05-18T15:27:55Z)
- Interaction Matching for Long-Tail Multi-Label Classification [57.262792333593644]
We present an elegant and effective approach for addressing limitations in existing multi-label classification models.
By performing soft n-gram interaction matching, we match labels with natural language descriptions.
arXiv Detail & Related papers (2020-05-18T15:27:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.