Mitigating Algorithmic Bias with Limited Annotations
- URL: http://arxiv.org/abs/2207.10018v2
- Date: Tue, 7 Feb 2023 17:03:44 GMT
- Title: Mitigating Algorithmic Bias with Limited Annotations
- Authors: Guanchu Wang and Mengnan Du and Ninghao Liu and Na Zou and Xia Hu
- Abstract summary: When sensitive attributes are not disclosed or available, a small portion of the training data must be manually annotated to mitigate bias.
We propose Active Penalization Of Discrimination (APOD), an interactive framework to guide the limited annotations towards maximally eliminating the effect of algorithmic bias.
APOD shows comparable performance to fully annotated bias mitigation, which demonstrates that APOD could benefit real-world applications when sensitive information is limited.
- Score: 65.060639928772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing work on fairness modeling commonly assumes that sensitive attributes
for all instances are fully available, which may not be true in many real-world
applications due to the high cost of acquiring sensitive information. When
sensitive attributes are not disclosed or available, a small portion of the
training data must be manually annotated to mitigate bias. However, an
annotated subset inherits the skewed distribution across sensitive groups from
the original dataset, which leads to suboptimal bias mitigation. To tackle
this challenge, we propose Active Penalization Of
Discrimination (APOD), an interactive framework to guide the limited
annotations towards maximally eliminating the effect of algorithmic bias. The
proposed APOD integrates discrimination penalization with active instance
selection to efficiently utilize the limited annotation budget, and it is
theoretically proven to bound the algorithmic bias. According
to the evaluation on five benchmark datasets, APOD outperforms the
state-of-the-art baseline methods under the limited annotation budget, and
shows comparable performance to fully annotated bias mitigation, which
demonstrates that APOD could benefit real-world applications when sensitive
information is limited.
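The abstract describes an alternation between discrimination penalization and active instance selection under a fixed annotation budget. The following is a minimal hypothetical sketch of such a loop, not APOD's actual algorithm: the group-balance reweighting stands in for the discrimination penalty, the uncertainty-based pick stands in for APOD's selection criterion, and `query_attribute` is an assumed annotation oracle. A binary sensitive attribute is assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fairness_gap(model, X, y, s):
    """Accuracy gap between the two sensitive groups -- a simple proxy
    for the bias quantity being penalized (illustrative only)."""
    return abs(model.score(X[s == 0], y[s == 0]) -
               model.score(X[s == 1], y[s == 1]))

def apod_style_loop(X, y, query_attribute, budget, seed_idx):
    """Hypothetical annotate-and-penalize loop: alternate between
    retraining under a group-balancing reweight (stand-in penalty) and
    spending one annotation on the most uncertain unannotated instance
    (stand-in selection criterion)."""
    s = np.full(len(X), -1)                 # -1 = sensitive attribute unknown
    for i in seed_idx:
        s[i] = query_attribute(i)           # oracle labels a small seed set
    model = LogisticRegression().fit(X, y)
    for _ in range(budget):
        # Reweight so the annotated sensitive groups contribute equally,
        # a crude substitute for a discrimination penalty.
        w = np.ones(len(X))
        for g in (0, 1):
            members = np.where(s == g)[0]
            if len(members):
                w[members] = len(X) / (2.0 * len(members))
        model = LogisticRegression().fit(X, y, sample_weight=w)
        # Active instance selection: annotate the pooled instance with the
        # smallest prediction margin.
        pool = np.where(s == -1)[0]
        if len(pool) == 0:
            break
        margin = np.abs(model.predict_proba(X[pool])[:, 1] - 0.5)
        pick = pool[np.argmin(margin)]
        s[pick] = query_attribute(pick)     # spend one unit of budget
    return model, s
```

In this rendering each new annotation immediately reshapes the training weights, which mirrors the abstract's budget-efficiency argument; `fairness_gap` can be evaluated on the annotated subset after each round to monitor how the bias shrinks.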
Related papers
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [49.417414031031264]
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- Enhancing Fairness in Unsupervised Graph Anomaly Detection through Disentanglement [33.565252991113766]
Graph anomaly detection (GAD) is increasingly crucial in various applications, ranging from financial fraud detection to fake news detection.
Current GAD methods largely overlook the fairness problem, which might result in discriminatory decisions skewed toward certain demographic groups.
We devise a novel DisEntangle-based FairnEss-aware aNomaly Detection framework on attributed graphs, named DEFEND.
Our empirical evaluations on real-world datasets reveal that DEFEND performs effectively in GAD and significantly enhances fairness compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-06-03T04:48:45Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our approach CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- D-CALM: A Dynamic Clustering-based Active Learning Approach for Mitigating Bias [13.008323851750442]
In this paper, we propose a novel adaptive clustering-based active learning algorithm, D-CALM, that dynamically adjusts clustering and annotation efforts.
Experiments on eight datasets for a diverse set of text classification tasks, including emotion, hate speech, dialog act, and book type detection, demonstrate that our proposed algorithm significantly outperforms baseline AL approaches.
arXiv Detail & Related papers (2023-05-26T15:17:43Z)
- Open World Classification with Adaptive Negative Samples [89.2422451410507]
Open world classification is a task in natural language processing with key practical relevance and impact.
We propose an approach based on Adaptive Negative Samples (ANS), designed to generate effective synthetic open-category samples in the training stage.
ANS achieves significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-09T21:12:46Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations (a rough sketch of this idea follows after this list).
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Fairness via Representation Neutralization [60.90373932844308]
We propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF).
RNF achieves fairness by debiasing only the task-specific classification head of DNN models (a rough sketch also follows after this list).
Experimental results on several benchmark datasets demonstrate that our RNF framework effectively reduces the discrimination of DNN models.
arXiv Detail & Related papers (2021-06-23T22:26:29Z)
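For the pseudo-attribute paper above, the clustering-plus-reweighting idea can be illustrated in a few lines. This is an assumed, simplified rendering rather than the paper's exact scheme: cluster ids in embedding space serve as pseudo-attributes, and rare clusters are upweighted so the majority group does not dominate training.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_attribute_weights(embeddings, n_clusters=8):
    """Cluster the embedding space, treat cluster ids as pseudo-attributes,
    and upweight rare clusters (illustrative cluster-based reweighting)."""
    pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    counts = np.bincount(pseudo, minlength=n_clusters)
    weights = (1.0 / counts)[pseudo]             # inverse cluster frequency
    return pseudo, weights * len(weights) / weights.sum()  # mean-normalize to 1
```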
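For the RNF entry, "debiasing only the task-specific classification head" might look roughly like the following: freeze the encoder and retrain the head on blended same-class representations. The pairing loader `loader_pairs` (yielding same-label inputs assumed to come from different sensitive groups) is a hypothetical helper, and the averaging is an illustrative reading of representation neutralization, not the paper's full procedure.

```python
import torch
import torch.nn as nn

def retrain_head(encoder, head, loader_pairs, epochs=3, lr=1e-3):
    """Freeze the encoder; retrain only the head on averaged representations
    of same-label pairs drawn from different sensitive groups (assumed)."""
    encoder.eval()                                   # encoder stays fixed
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x_a, x_b, y in loader_pairs:             # same label, different groups
            with torch.no_grad():
                z = 0.5 * (encoder(x_a) + encoder(x_b))  # neutralized feature
            loss = loss_fn(head(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```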