Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models
- URL: http://arxiv.org/abs/2209.07879v1
- Date: Fri, 16 Sep 2022 12:14:56 GMT
- Title: Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models
- Authors: Ting Wu and Tao Gui
- Abstract summary: Current debiasing methods impose excessive reliance on knowledge of bias attributes.
A novel model, Recovering Intended-Feature Subspace with Knowledge-Free (RISK) is developed.
- Score: 16.693441490923675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Datasets with significant proportions of bias pose a threat to training a
trustworthy model on NLU tasks. Despite great progress, current debiasing
methods rely excessively on knowledge of bias attributes. The definition of
these attributes, however, is elusive and varies across datasets. Furthermore,
leveraging these attributes at the input level for bias mitigation may leave a
gap between intrinsic data properties and the underlying decision rule. To
narrow this gap and free debiasing from bias supervision, we propose extending
bias mitigation into the feature space. To this end, a novel model, Recovering
Intended-Feature Subspace with Knowledge-Free (RISK), is developed. Assuming
that shortcut features caused by various biases are unintended for prediction,
RISK treats them as redundant features. By delving into a lower manifold to
remove these redundancies, RISK reveals that an extremely low-dimensional
subspace of intended features can robustly represent the highly biased dataset.
Empirical results demonstrate that our model consistently improves
generalization to out-of-distribution sets and achieves new state-of-the-art
performance.
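The paper's own training procedure is not reproduced here. As a rough illustration of the core idea, that a very low-dimensional subspace of the encoder's feature space can suffice for prediction while higher-variance shortcut directions are discarded, the sketch below projects features onto their top-k principal subspace via a truncated SVD. The function name, the dimensions, and the choice of SVD as the redundancy-removal step are all illustrative assumptions, not the RISK method itself.

```python
import numpy as np

def recover_subspace(features: np.ndarray, k: int) -> np.ndarray:
    """Project encoder features onto their top-k principal subspace.

    Hypothetical sketch of the 'intended-feature subspace' idea: keep
    only a very low-dimensional subspace and treat the remaining
    directions as redundant (potentially shortcut-carrying) features.
    """
    # Center the features, then take a truncated SVD of the data matrix.
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]  # top-k right singular vectors span the kept subspace
    # Reconstruct each feature vector inside the k-dimensional subspace.
    return (centered @ basis.T) @ basis + mean

# Toy usage: 100 examples with 768-dim features compressed to a 16-dim subspace.
feats = np.random.default_rng(0).normal(size=(100, 768))
recovered = recover_subspace(feats, k=16)
print(recovered.shape)  # (100, 768)
```

A downstream classifier would then be trained on `recovered` (or on the k-dimensional coordinates directly), under the assumption that intended features concentrate in that subspace.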
Related papers
- A Novel Generative Model with Causality Constraint for Mitigating Biases in Recommender Systems [20.672668625179526]
Latent confounding bias can obscure the true causal relationship between user feedback and item exposure.
We propose a novel generative framework called Latent Causality Constraints for debiasing representation learning in recommender systems.
arXiv Detail & Related papers (2025-05-22T14:09:39Z) - Fairness without Sensitive Attributes via Knowledge Sharing [13.141672574114597]
We propose a confidence-based hierarchical classifier structure called "Reckoner" for reliable fair model learning under the assumption of missing sensitive attributes.
Our experimental results show that Reckoner consistently outperforms state-of-the-art baselines on the COMPAS and New Adult datasets.
arXiv Detail & Related papers (2024-09-27T06:16:14Z) - Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z) - Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias, which existing methods neglect.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z) - Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels, caused by variation in protected attributes, an inherent bias is induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance [70.31427277842239]
We introduce a novel debiasing method called confidence regularization.
It discourages models from exploiting biases while enabling them to receive enough incentive to learn from all the training examples.
We evaluate our method on three NLU tasks and show that, in contrast to its predecessors, it improves the performance on out-of-distribution datasets.
arXiv Detail & Related papers (2020-05-01T11:22:55Z)
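The confidence-regularization entry above describes discouraging models from exploiting biases while keeping a learning signal on every example. A common way this is realized in the debiasing literature is to flatten a teacher model's output distribution in proportion to how confident a bias-only model is on an example, then distill the student from the smoothed targets. The sketch below shows only that smoothing step; the function name, the exponent-based scaling rule, and the toy numbers are illustrative assumptions, not necessarily the exact formulation of the cited paper.

```python
import numpy as np

def smooth_teacher_probs(teacher_probs: np.ndarray, bias_conf: np.ndarray) -> np.ndarray:
    """Flatten teacher distributions on examples the bias model is sure about.

    Hedged sketch: for each example, raise the teacher's probabilities to
    the power (1 - bias_conf). When bias_conf is near 1 (a 'biased'
    example), the target distribution approaches uniform, so the student
    gets little incentive to exploit the shortcut; when bias_conf is 0,
    the teacher's distribution is left unchanged.
    """
    scaled = teacher_probs ** (1.0 - bias_conf[:, None])  # per-example exponent
    return scaled / scaled.sum(axis=1, keepdims=True)     # renormalize rows

# Toy usage: identical teacher outputs, but the second example is fully 'biased'.
teacher = np.array([[0.7, 0.2, 0.1],
                    [0.7, 0.2, 0.1]])
bias_conf = np.array([0.0, 1.0])
smoothed = smooth_teacher_probs(teacher, bias_conf)
print(smoothed[1])  # flattened to the uniform distribution
```

A student model would then be trained with cross-entropy against `smoothed` instead of the hard labels, which is what lets it keep learning from all examples while downweighting shortcut-driven confidence.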
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.