Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts
- URL: http://arxiv.org/abs/2308.10005v1
- Date: Sat, 19 Aug 2023 13:11:40 GMT
- Title: Partition-and-Debias: Agnostic Biases Mitigation via A Mixture of Biases-Specific Experts
- Authors: Jiaxuan Li, Duc Minh Vo, Hideki Nakayama
- Abstract summary: We present the Partition-and-Debias (PnD) method that uses a mixture of biases-specific experts to implicitly divide the bias space into multiple subspaces.
Experiments on both public and constructed benchmarks demonstrate the efficacy of PnD.
- Score: 24.055919128977195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bias mitigation in image classification has been widely researched, and
existing methods have yielded notable results. However, most of these methods
implicitly assume that a given image contains only one type of known or unknown
bias, failing to consider the complexities of real-world biases. We introduce a
more challenging scenario, agnostic biases mitigation, which aims to remove
biases even when neither the types of biases present nor their number is known
for a given dataset. To address this difficult task, we present the Partition-and-Debias
(PnD) method that uses a mixture of biases-specific experts to implicitly
divide the bias space into multiple subspaces and a gating module to find a
consensus among experts to achieve debiased classification. Experiments on both
public and constructed benchmarks demonstrate the efficacy of PnD. Code is
available at: https://github.com/Jiaxuan-Li/PnD.
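As a concrete illustration of the abstract's design, the sketch below wires a set of bias-specific expert heads to a softmax gate over shared backbone features. All names, dimensions, and the specific gating form are our assumptions for illustration; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PnDClassifier(nn.Module):
    """Illustrative gated mixture of bias-specific expert heads.

    Each expert implicitly covers one bias subspace; the gate forms a
    consensus over expert predictions. Names and sizes are assumptions.
    """

    def __init__(self, feat_dim=512, num_experts=4, num_classes=10):
        super().__init__()
        # One classification head per (implicit) bias subspace.
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_experts)]
        )
        # Gating module: soft weights over experts, conditioned on the feature.
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, feats):
        # feats: (B, feat_dim) features from any backbone encoder.
        expert_logits = torch.stack([e(feats) for e in self.experts], dim=1)  # (B, E, C)
        gate_weights = F.softmax(self.gate(feats), dim=-1)                    # (B, E)
        # Consensus: gate-weighted average of expert logits.
        return (gate_weights.unsqueeze(-1) * expert_logits).sum(dim=1)        # (B, C)
```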
Related papers
- CosFairNet: A Parameter-Space based Approach for Bias Free Learning [1.9116784879310025]
Deep neural networks trained on biased data often inadvertently learn unintended inference rules.
We introduce a novel approach to address bias directly in the model's parameter space, preventing its propagation across layers.
We show enhanced classification accuracy and debiasing effectiveness across various synthetic and real-world datasets.
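The name suggests a cosine-similarity constraint in parameter space; the sketch below penalizes alignment between a debiased model's weights and a bias-capturing model's weights, layer by layer. The cosine form and the pairing of models are assumptions inferred from the title, not the paper's verified objective.

```python
import torch
import torch.nn.functional as F

def parameter_space_penalty(debias_model, bias_model):
    """Penalize alignment between a debiased model's weights and a
    bias-capturing model's weights, layer by layer.

    The cosine-similarity form is an assumption suggested by the name
    'CosFairNet'; the paper's exact objective may differ.
    """
    penalty = 0.0
    for p_d, p_b in zip(debias_model.parameters(), bias_model.parameters()):
        penalty = penalty + F.cosine_similarity(
            p_d.flatten(), p_b.detach().flatten(), dim=0
        ).abs()
    return penalty
```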
arXiv Detail & Related papers (2024-10-19T13:06:40Z)
- Language-guided Detection and Mitigation of Unknown Dataset Bias [23.299264313976213]
We propose a framework that identifies potential biases as keywords, without requiring prior knowledge, based on their partial occurrence in image captions.
Our framework not only outperforms existing methods that assume no prior knowledge, but is also comparable to a method that does assume prior knowledge.
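A toy version of the keyword idea: caption every image (with any off-the-shelf captioner), then flag words whose occurrences concentrate in one class. The skew score below is an illustrative heuristic, not the paper's exact criterion.

```python
from collections import Counter, defaultdict

def find_bias_keyword_candidates(captions, labels, top_k=10):
    """Flag caption keywords whose occurrence is skewed toward one class.

    captions: list of caption strings (e.g., from an off-the-shelf captioner).
    labels:   list of class labels, aligned with captions.
    """
    per_class = defaultdict(Counter)
    total = Counter()
    for caption, label in zip(captions, labels):
        for word in set(caption.lower().split()):
            per_class[label][word] += 1
            total[word] += 1
    scores = {}
    for word, count in total.items():
        if count < 5:  # ignore rare words
            continue
        # Fraction of this word's occurrences concentrated in one class.
        majority = max(cls[word] for cls in per_class.values())
        scores[word] = majority / count
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```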
arXiv Detail & Related papers (2024-06-05T03:11:33Z)
- Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of the optimization.
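In code, multifactorial correction typically amounts to weighting each observed rating by an inverse propensity that depends on both the item and the rating value; the sketch below assumes a precomputed, smoothed propensity table and uses clipping as a simple variance-reduction stand-in.

```python
import torch

def ips_weighted_loss(pred, rating, item_idx, propensity):
    """Inverse-propensity-scored squared loss for observed ratings.

    propensity[i, r] ~ P(observed | item i, rating value r), assumed
    precomputed from observation counts with additive smoothing.
    """
    p = propensity[item_idx, rating.long()]        # multifactorial: item and value
    weights = 1.0 / p.clamp(min=1e-3)              # clipping to reduce variance
    return (weights * (pred - rating) ** 2).mean()
```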
arXiv Detail & Related papers (2024-04-29T12:18:21Z)
- Is There a One-Model-Fits-All Approach to Information Extraction? Revisiting Task Definition Biases [62.806300074459116]
Definition bias, an inconsistency in how the same task is defined across datasets, can mislead models.
We identify two types of definition bias in IE: bias among information extraction datasets and bias between information extraction datasets and instruction tuning datasets.
We propose a multi-stage framework consisting of definition bias measurement, bias-aware fine-tuning, and task-specific bias mitigation.
arXiv Detail & Related papers (2024-03-25T03:19:20Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
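A generic way to expose prompt bias, in the spirit of this investigation, is to query a masked language model with a content-free subject: completions that still rank highly are driven by the prompt itself rather than by factual knowledge. The probe and model choice below are illustrative, not necessarily the authors' estimator.

```python
from transformers import pipeline

# Fill-mask probe; the model choice is an illustrative assumption.
fill = pipeline("fill-mask", model="bert-base-uncased")

# A content-free subject: completions that still rank highly here are
# driven by the prompt itself, not by knowledge about a real entity.
for pred in fill("The capital of N/A is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```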
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- Revisiting the Dataset Bias Problem from a Statistical Perspective [72.94990819287551]
We study the "dataset bias" problem from a statistical standpoint.
We identify the main cause of the problem as the strong correlation between a class attribute u and a non-class attribute b.
We propose to mitigate dataset bias by either weighting the objective of each sample $n$ by $\frac{1}{p(u_n \mid b_n)}$ or by sampling each sample with a weight proportional to $\frac{1}{p(u_n \mid b_n)}$.
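The proposed weighting translates directly into a per-sample loss scale; a minimal sketch, assuming the conditional probability table $p(u \mid b)$ has been estimated beforehand:

```python
import torch
import torch.nn.functional as F

def weighted_debias_loss(logits, u, b, p_u_given_b):
    """Cross-entropy scaled by 1 / p(u_n | b_n) per sample.

    u: class attribute labels, b: non-class (bias) attribute labels;
    p_u_given_b[u, b] is an empirical table, assumed precomputed.
    """
    per_sample = F.cross_entropy(logits, u, reduction="none")
    weights = 1.0 / p_u_given_b[u, b].clamp(min=1e-3)  # guard tiny probabilities
    return (weights * per_sample).mean()
```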
arXiv Detail & Related papers (2024-02-05T22:58:06Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our CIE approach not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
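A rough approximation of the causal/spurious split at the node-representation level: partition each embedding into two halves, classify from the "causal" half, and penalize dependence between the halves. The split and the cross-covariance penalty are stand-ins for CIE's actual estimators.

```python
import torch
import torch.nn.functional as F

def cie_style_loss(node_emb, labels, classifier):
    """Classify from the 'causal' half of each node embedding and
    discourage linear dependence with the 'spurious' half.

    The split point and the cross-covariance penalty are illustrative
    assumptions; classifier is assumed to take half-size inputs.
    """
    d = node_emb.size(1) // 2
    causal, spurious = node_emb[:, :d], node_emb[:, d:]
    cls_loss = F.cross_entropy(classifier(causal), labels)
    # Cross-covariance of the centered halves; zero under linear independence.
    c = causal - causal.mean(0)
    s = spurious - spurious.mean(0)
    indep_penalty = (c.T @ s / len(c)).pow(2).mean()
    return cls_loss + 0.1 * indep_penalty
```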
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Discover and Mitigate Unknown Biases with Debiasing Alternate Networks [42.89260385194433]
We propose Debiasing Alternate Networks (DebiAN), which comprises two networks, a discoverer and a classifier.
DebiAN aims at unlearning the biases identified by the discoverer.
While previous works evaluate debiasing results in terms of a single bias, we create the Multi-Color MNIST dataset to better benchmark the mitigation of multiple biases.
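The discoverer/classifier alternation can be sketched as two optimizers taking turns; the objectives below are generic reweighting stand-ins rather than DebiAN's exact losses.

```python
import torch
import torch.nn.functional as F

def debian_style_step(x, y, discoverer, classifier, opt_d, opt_c):
    """One alternating update; both objectives are illustrative stand-ins."""
    # Discoverer step: find a soft grouping that separates easy from hard
    # samples, i.e., one along which the classifier's loss differs the most.
    with torch.no_grad():
        cls_loss = F.cross_entropy(classifier(x), y, reduction="none")
    group = torch.sigmoid(discoverer(x)).squeeze(-1)      # soft group in [0, 1]
    gap = (group * cls_loss).mean() - ((1 - group) * cls_loss).mean()
    opt_d.zero_grad()
    (-gap).backward()                                     # maximize the gap
    opt_d.step()

    # Classifier step: upweight the harder (bias-conflicting) group.
    with torch.no_grad():
        w = 1.0 + torch.sigmoid(discoverer(x)).squeeze(-1)
    loss = (w * F.cross_entropy(classifier(x), y, reduction="none")).mean()
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
```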
arXiv Detail & Related papers (2022-07-20T17:59:51Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias that incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation [31.944147533327058]
We propose a novel method, Contrastive Debiasing via Generative Bias-transformation (CDvG), which works without explicit bias labels or bias-free samples.
Our method demonstrates superior performance compared to prior approaches, especially when bias-free samples are scarce or absent.
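A compact rendering of the idea: contrast each image against a bias-transformed version of itself with a standard NT-Xent-style loss. Here `bias_transform` is a placeholder for the paper's generative model.

```python
import torch
import torch.nn.functional as F

def contrastive_debias_loss(encoder, bias_transform, x, temperature=0.1):
    """NT-Xent-style loss pairing each image with its bias-transformed view.

    bias_transform is a placeholder for a generative model that alters
    bias cues (e.g., texture) while preserving class content.
    """
    z1 = F.normalize(encoder(x), dim=1)                  # original view
    z2 = F.normalize(encoder(bias_transform(x)), dim=1)  # bias-swapped view
    logits = z1 @ z2.T / temperature                     # (B, B) similarities
    targets = torch.arange(len(x), device=x.device)      # positives on diagonal
    return F.cross_entropy(logits, targets)
```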
arXiv Detail & Related papers (2021-12-02T07:16:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.