Accurate Use of Label Dependency in Multi-Label Text Classification
Through the Lens of Causality
- URL: http://arxiv.org/abs/2310.07588v1
- Date: Wed, 11 Oct 2023 15:28:44 GMT
- Title: Accurate Use of Label Dependency in Multi-Label Text Classification
Through the Lens of Causality
- Authors: Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin
- Abstract summary: Multi-Label Text Classification aims to assign the most relevant labels to each given text.
Label dependency may cause the model to suffer from unwanted prediction bias.
We propose a CounterFactual Text Classifier (CFTC) to eliminate the correlation bias and make causality-based predictions.
- Score: 25.36416774024584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-Label Text Classification (MLTC) aims to assign the most relevant
labels to each given text. Existing methods demonstrate that label dependency
can help to improve the model's performance. However, the introduction of label
dependency may cause the model to suffer from unwanted prediction bias. In this
study, we attribute the bias to the model's misuse of label dependency, i.e.,
the model tends to utilize the correlation shortcut in label dependency rather
than fusing text information and label dependency for prediction. Motivated by
causal inference, we propose a CounterFactual Text Classifier (CFTC) to
eliminate the correlation bias, and make causality-based predictions.
Specifically, our CFTC first adopts the predict-then-modify backbone to extract
precise label information embedded in label dependency, then blocks the
correlation shortcut through the counterfactual de-bias technique with the help
of the human causal graph. Experimental results on three datasets demonstrate
that our CFTC significantly outperforms the baselines and effectively
eliminates the correlation bias in datasets.
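The de-bias step is only outlined above, so here is a minimal sketch of the general counterfactual idea: subtract the prediction the model would make from label dependency alone, so that only the text-grounded effect remains. The function and variable names are hypothetical, and the plain subtraction with a `tau` weight is an assumption, not the authors' exact formulation.
```python
import numpy as np

def counterfactual_debias(full_logits, label_only_logits, tau=1.0):
    """Suppress the correlation shortcut in multi-label scores.

    full_logits       : factual scores computed from text + label dependency.
    label_only_logits : counterfactual scores with the text input blocked,
                        i.e. what label dependency alone would predict.
    tau               : strength of the de-bias term (assumed hyperparameter).
    Debiased score = factual - tau * counterfactual, so labels predicted
    purely from label co-occurrence are pushed down.
    """
    debiased = full_logits - tau * label_only_logits
    probs = 1.0 / (1.0 + np.exp(-debiased))  # sigmoid, multi-label setting
    return (probs >= 0.5).astype(int), probs

# Toy usage with 4 labels: label 3 is mostly driven by co-occurrence bias.
full = np.array([2.0, -1.0, 0.5, 1.2])
label_only = np.array([0.1, -0.2, 0.0, 1.5])
preds, probs = counterfactual_debias(full, label_only)
print(preds)  # -> [1 0 1 0]: the shortcut-driven label 3 is dropped
```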
Related papers
- Towards Robust Text Classification: Mitigating Spurious Correlations with Causal Learning [2.7813683000222653]
We propose the Causally Calibrated Robust Classifier (CCR) to reduce models' reliance on spurious correlations.
CCR integrates a causal feature selection method based on counterfactual reasoning, along with an inverse propensity weighting (IPW) loss function.
We show that CCR achieves state-of-the-art performance among methods without group labels, and in some cases it can compete with models that utilize group labels.
arXiv Detail & Related papers (2024-11-01T21:29:07Z) - Substituting Data Annotation with Balanced Updates and Collective Loss
in Multi-label Text Classification [19.592985329023733]
Multi-label text classification (MLTC) is the task of assigning multiple labels to a given text.
We study the MLTC problem in annotation-free and scarce-annotation settings, in which the magnitude of the available supervision signal is linear in the number of labels.
Our method follows three steps: (1) mapping the input text to a set of preliminary label likelihoods by natural language inference with a pre-trained language model, (2) calculating a signed label dependency graph from label descriptions, and (3) updating the preliminary label likelihoods with message passing along the label dependency graph (see the sketch below).
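Step (3) is easy to picture with a toy sketch. The additive update rule and the 0.5 centring below are illustrative assumptions, not the paper's exact propagation scheme.
```python
import numpy as np

def propagate(prelim, A_signed, steps=3, alpha=0.5):
    """Refine preliminary label likelihoods by signed message passing.

    prelim   : (L,) per-label likelihoods from the NLI step, in [0, 1].
    A_signed : (L, L) signed dependency matrix built from label descriptions;
               A[i, j] > 0 means label j supports label i, < 0 means it
               contradicts it.
    """
    p = prelim.copy()
    for _ in range(steps):
        # centre at 0.5 so confident neighbours push scores up or down
        p = np.clip(p + alpha * (A_signed @ (p - 0.5)), 0.0, 1.0)
    return p

# Toy usage: label 0 supports label 1 and contradicts label 2.
A = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [-0.6, 0.0, 0.0]])
print(propagate(np.array([0.9, 0.5, 0.5]), A))  # label 1 rises, label 2 falls
```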
arXiv Detail & Related papers (2023-09-24T04:12:52Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this perspective, we pursue consistency between the predicted and the ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
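The summary leaves the objective unspecified; below is a minimal sketch of one way to pursue label-distribution consistency in PU learning, by matching the predicted positive proportion on unlabeled data to a known class prior. The `prior_pos` argument and the L1 distance are assumptions, not Dist-PU's exact loss.
```python
import torch

def label_dist_consistency_loss(logits, prior_pos):
    """Penalize mismatch between predicted and ground-truth label distributions.

    logits    : (n,) scores for a batch of unlabeled examples.
    prior_pos : known or estimated class prior P(y = 1).
    """
    probs = torch.sigmoid(logits)                      # per-example P(y=1 | x)
    pred_dist = torch.stack([probs.mean(), 1.0 - probs.mean()])
    true_dist = torch.tensor([prior_pos, 1.0 - prior_pos])
    return torch.abs(pred_dist - true_dist).sum()      # L1 distance

# Toy usage: add this term to the usual PU classification loss.
loss = label_dist_consistency_loss(torch.randn(32), prior_pos=0.3)
```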
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Leveraging Instance Features for Label Aggregation in Programmatic Weak
Supervision [75.1860418333995]
Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm to synthesize training labels efficiently.
The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources abstracted as labeling functions (LFs).
Existing statistical label models typically rely only on the outputs of the LFs, ignoring instance features when modeling the underlying generative process.
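For concreteness, a minimal sketch of a feature-free label model of the kind the paper critiques: a weighted vote over LF outputs. Learning the per-LF reliabilities, possibly conditioned on instance features as the paper advocates, is the interesting part and is deliberately omitted; names are hypothetical.
```python
import numpy as np

def aggregate_lfs(lf_votes, lf_weights=None):
    """Aggregate noisy labeling-function outputs into training labels.

    lf_votes   : (n, m) matrix; entry in {-1, 0, +1} is LF j's vote on
                 example i (0 means abstain).
    lf_weights : optional per-LF reliabilities; uniform if None.
    """
    n, m = lf_votes.shape
    w = np.ones(m) if lf_weights is None else lf_weights
    scores = lf_votes @ w            # abstains contribute nothing to the sum
    return (scores > 0).astype(int)  # 1 = positive, 0 = negative (ties -> 0)

votes = np.array([[1, 1, -1],
                  [0, -1, -1]])
print(aggregate_lfs(votes))          # -> [1 0]
```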
arXiv Detail & Related papers (2022-10-06T07:28:53Z) - Multi-label Classification with High-rank and High-order Label
Correlations [62.39748565407201]
Previous methods capture the high-order label correlations mainly by transforming the label matrix to a latent label space with low-rank matrix factorization.
We propose a simple yet effective method that depicts the high-order label correlations explicitly while maintaining the high rank of the label matrix.
Comparative studies over twelve benchmark data sets validate the effectiveness of the proposed algorithm in multi-label classification.
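For contrast, here is a minimal sketch of the low-rank latent-space factorization that the previous methods use, and that this paper argues against because it caps the rank of the recovered label matrix. The plain squared-error objective and all names are illustrative, not any specific prior method.
```python
import numpy as np

def low_rank_label_factorization(Y, rank=2, iters=500, lr=0.05, seed=0):
    """Factorize the label matrix Y ~= U @ V into a rank-constrained
    latent label space via gradient descent on squared error.

    Y : (n, L) binary label matrix; U is (n, rank), V is (rank, L).
    """
    rng = np.random.default_rng(seed)
    n, L = Y.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(rank, L))
    for _ in range(iters):
        R = U @ V - Y                                 # reconstruction residual
        U, V = U - lr * R @ V.T, V - lr * U.T @ R     # gradient steps
    return U, V

# Toy usage: a rank-2 reconstruction of a 3x3 label matrix.
Y = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=float)
U, V = low_rank_label_factorization(Y)
```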
arXiv Detail & Related papers (2022-07-09T05:15:31Z) - Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent partial label learning and assume that each example is associated with a latent label distribution, in which a real number per label indicates the degree to which that label describes the instance.
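A toy sketch of the setup: given a candidate label set and model scores, one simple way to estimate a latent label distribution is a softmax restricted to the candidates. This is a simplified stand-in for illustration, not the paper's recovery method.
```python
import numpy as np

def candidate_restricted_distribution(scores, candidate_mask):
    """Estimate a latent label distribution for a partial-label example.

    scores         : (L,) model scores for every label.
    candidate_mask : (L,) 1 for labels in the candidate set, else 0.
    Mass is placed only on candidate labels, proportional to the
    exponentiated scores.
    """
    exp = np.exp(scores - scores.max()) * candidate_mask
    return exp / exp.sum()

# Toy usage: 4 labels, candidate set {0, 1, 3}.
print(candidate_restricted_distribution(
    np.array([2.0, 1.0, 0.5, -1.0]), np.array([1, 1, 0, 1])))
```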
arXiv Detail & Related papers (2021-10-25T12:50:26Z) - Fine-grained Entity Typing via Label Reasoning [41.05579329042479]
We propose the Label Reasoning Network (LRN), which sequentially reasons over fine-grained entity labels.
Experiments show that LRN achieves state-of-the-art performance on standard ultra-fine-grained entity typing benchmarks.
arXiv Detail & Related papers (2021-09-13T07:08:47Z) - Label Confusion Learning to Enhance Text Classification Models [3.0251266104313643]
Label Confusion Model (LCM) learns label confusion to capture semantic overlap among labels.
LCM can generate a better label distribution to replace the original one-hot label vector.
Experiments on five text classification benchmark datasets demonstrate the effectiveness of LCM for several widely used deep learning classification models.
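A minimal sketch of an LCM-style soft target: score the semantic similarity between the text and every label representation, turn the scores into a confusion distribution, and mix it with the one-hot label. The dot-product similarity and the `alpha` mixing weight are illustrative assumptions, not the paper's exact architecture.
```python
import torch
import torch.nn.functional as F

def simulated_label_distribution(text_repr, label_reprs, y_onehot, alpha=0.5):
    """Build a soft label distribution that captures label confusion.

    text_repr   : (d,) encoded text.
    label_reprs : (L, d) learned label representations.
    y_onehot    : (L,) original one-hot label vector.
    """
    sims = label_reprs @ text_repr        # (L,) text-label similarity
    confusion = F.softmax(sims, dim=0)    # label confusion distribution
    soft = y_onehot + alpha * confusion   # keep the true label dominant
    return soft / soft.sum()              # renormalize to a distribution

# Toy usage: replace the one-hot target for label 2 with a soft target.
text = torch.randn(64)
labels = torch.randn(5, 64)
y = F.one_hot(torch.tensor(2), num_classes=5).float()
target = simulated_label_distribution(text, labels, y)
```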
arXiv Detail & Related papers (2020-12-09T11:34:35Z) - A Study on the Autoregressive and non-Autoregressive Multi-label
Learning [77.11075863067131]
We propose a self-attention based variational encoder model to extract label-label and label-feature dependencies jointly.
Our model can therefore be used to predict all labels in parallel while still including both label-label and label-feature dependencies.
arXiv Detail & Related papers (2020-12-03T05:41:44Z) - Few-shot Slot Tagging with Collapsed Dependency Transfer and
Label-enhanced Task-adaptive Projection Network [61.94394163309688]
We propose a Label-enhanced Task-Adaptive Projection Network (L-TapNet) based on the state-of-the-art few-shot classification model -- TapNet.
Experimental results show that our model significantly outperforms the strongest few-shot learning baseline by 14.64 F1 points in the one-shot setting.
arXiv Detail & Related papers (2020-06-10T07:50:44Z) - Multi-Label Text Classification using Attention-based Graph Neural
Network [0.0]
A graph attention network-based model is proposed to capture the attentive dependency structure among the labels.
The proposed model achieves similar or better performance compared to the previous state-of-the-art models.
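A minimal sketch of the core idea, one generic graph-attention update over label nodes so each label representation attends to its neighbours in a label graph. This is a standard GAT-style step under assumed names, not the paper's exact architecture.
```python
import torch
import torch.nn.functional as F

def label_graph_attention(label_emb, adj):
    """One attention step over label nodes to capture label dependencies.

    label_emb : (L, d) label embeddings.
    adj       : (L, L) 0/1 adjacency over labels (e.g., from co-occurrence).
    """
    L, d = label_emb.shape
    # pairwise attention scores from scaled dot products, masked to edges
    scores = label_emb @ label_emb.T / d ** 0.5
    scores = scores.masked_fill(adj == 0, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return attn @ label_emb              # dependency-aware label representations

# Toy usage: 4 labels on a chain graph (self-loops included).
emb = torch.randn(4, 16)
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]])
out = label_graph_attention(emb, adj)
```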
arXiv Detail & Related papers (2020-03-22T17:12:43Z)