BiMi Sheets: Infosheets for bias mitigation methods
- URL: http://arxiv.org/abs/2505.22114v1
- Date: Wed, 28 May 2025 08:41:35 GMT
- Title: BiMi Sheets: Infosheets for bias mitigation methods
- Authors: MaryBeth Defrance, Guillaume Bied, Maarten Buyl, Jefrey Lijffijt, Tijl De Bie
- Abstract summary: Hundreds of bias mitigation methods have been proposed in the pursuit of fairness in machine learning (ML). We propose BiMi Sheets as a portable, uniform guide to document the design choices of any bias mitigation method.
- Score: 13.821369999986999
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Over the past 15 years, hundreds of bias mitigation methods have been proposed in the pursuit of fairness in machine learning (ML). However, algorithmic biases are domain-, task-, and model-specific, leading to a 'portability trap': bias mitigation solutions in one context may not be appropriate in another. Thus, a myriad of design choices have to be made when creating a bias mitigation method, such as the formalization of fairness it pursues, and where and how it intervenes in the ML pipeline. This creates challenges in benchmarking and comparing the relative merits of different bias mitigation methods, and limits their uptake by practitioners. We propose BiMi Sheets as a portable, uniform guide to document the design choices of any bias mitigation method. This enables researchers and practitioners to quickly learn its main characteristics and to compare them with their desiderata. Furthermore, the sheets' structure allows for the creation of a structured database of bias mitigation methods. To foster the sheets' adoption, we provide a platform for finding and creating BiMi Sheets at bimisheet.com.
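To make the "structured database" idea concrete, here is a minimal sketch of a BiMi Sheet as a machine-readable record; the field names are assumptions inferred from the abstract (fairness formalization, pipeline intervention point), not the official schema from bimisheet.com.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a BiMi Sheet as a structured record. The field
# names are assumptions based on the abstract's description of design
# choices, not the official schema from bimisheet.com.
@dataclass
class BiMiSheet:
    method_name: str
    fairness_notion: str                # e.g. "demographic parity"
    pipeline_stage: str                 # "pre-", "in-", or "post-processing"
    task_types: list = field(default_factory=list)
    protected_attributes: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)
    reference: str = ""                 # paper URL or DOI

sheet = BiMiSheet(
    method_name="Example reweighing method",
    fairness_notion="demographic parity",
    pipeline_stage="pre-processing",
    task_types=["binary classification"],
    protected_attributes=["gender"],
    assumptions=["protected attribute is observed at training time"],
)
```

A collection of such records maps directly onto the structured, searchable database of bias mitigation methods that the abstract describes.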
Related papers
- A Principled Approach to Randomized Selection under Uncertainty: Applications to Peer Review and Grant Funding [68.43987626137512]
We propose a principled framework for randomized decision-making based on interval estimates of the quality of each item. We introduce MERIT, an optimization-based method that maximizes the worst-case expected number of top candidates selected. We prove that MERIT satisfies desirable axiomatic properties not guaranteed by existing approaches.
arXiv Detail & Related papers (2025-06-23T19:59:30Z) - ALBAR: Adversarial Learning approach to mitigate Biases in Action Recognition [52.537021302246664]
Action recognition models often suffer from background bias (i.e., inferring actions from background cues) and foreground bias (i.e., relying on subject appearance). We propose ALBAR, a novel adversarial training method that mitigates foreground and background biases without requiring specialized knowledge of the bias attributes. We evaluate our method on established background and foreground bias protocols, setting a new state of the art and improving combined debiasing performance by over 12% absolute on HMDB51.
arXiv Detail & Related papers (2025-01-31T20:47:06Z) - Disperse-Then-Merge: Pushing the Limits of Instruction Tuning via Alignment Tax Reduction [75.25114727856861]
Large language models (LLMs) tend to suffer from deterioration in the later stages of the supervised fine-tuning (SFT) process.
We introduce a simple disperse-then-merge framework to address the issue.
Our framework outperforms various sophisticated methods such as data curation and training regularization on a series of standard knowledge and reasoning benchmarks.
arXiv Detail & Related papers (2024-05-22T08:18:19Z) - How to be fair? A study of label and selection bias [3.018638214344819]
It is widely accepted that biased data leads to biased and potentially unfair models.
Several measures for bias in data and model predictions have been proposed, as well as bias mitigation techniques.
Despite the myriad of mitigation techniques developed in the past decade, it is still poorly understood under what circumstances which methods work.
arXiv Detail & Related papers (2024-03-21T10:43:55Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
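As a rough illustration of the zero-shot classification setting being audited, the sketch below scores an image against text prompts with the Hugging Face transformers CLIP API; the prompts and placeholder image are assumptions for illustration, not the paper's evaluation protocol.

```python
import numpy as np
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Zero-shot classification with CLIP: score an image against text prompts.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a doctor", "a photo of a nurse"]  # illustrative prompts
image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))  # placeholder image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```

A bias evaluation in this style would aggregate such predictions over images annotated with protected attributes and compare prediction rates across groups.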
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Bias Mimicking: A Simple Sampling Approach for Bias Mitigation [57.17709477668213]
We introduce a new class-conditioned sampling method: Bias Mimicking.
Bias Mimicking improves underrepresented groups' accuracy over prior sampling methods by 3% across four benchmarks.
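The blurb above describes a class-conditioned sampling scheme; below is a minimal sketch of the general idea (for each class, subsample the remaining classes so their bias-attribute distribution mimics that class's), under simplifying assumptions. It deliberately omits the paper's full training procedure.

```python
import numpy as np

def mimic_subsample(labels, bias_attr, target_class, seed=0):
    """Sketch of class-conditioned subsampling: keep `target_class` intact
    and subsample every other class so that its bias-attribute distribution
    mimics the target class's. Illustrative only; the paper's full training
    procedure differs."""
    rng = np.random.default_rng(seed)
    keep = list(np.flatnonzero(labels == target_class))
    # Bias-attribute proportions within the target class.
    values, counts = np.unique(bias_attr[labels == target_class], return_counts=True)
    props = counts / counts.sum()
    for c in np.unique(labels):
        if c == target_class:
            continue
        avail = {v: np.flatnonzero((labels == c) & (bias_attr == v)) for v in values}
        # Largest subset of class c that can match the target proportions.
        n = int(min(len(avail[v]) / p for v, p in zip(values, props)))
        for v, p in zip(values, props):
            take = int(n * p)
            if take > 0:
                keep.extend(rng.choice(avail[v], size=take, replace=False))
    return np.sort(np.asarray(keep, dtype=int))
```

The intuition is that, within each mimicked subset, class membership is statistically independent of the bias attribute, so the model cannot exploit their correlation.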
arXiv Detail & Related papers (2022-09-30T17:33:00Z) - Selection of a representative sorting model in a preference disaggregation setting: a review of existing procedures, new proposals, and experimental comparison [4.447467536572626]
We consider preference disaggregation in the context of multiple criteria sorting.
Given the multiplicity of sorting models compatible with indirect preferences, selecting a single, representative one can be conducted differently.
We present three novel procedures that implement the robust assignment rule in practice.
arXiv Detail & Related papers (2022-08-30T02:01:35Z) - Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey [30.637712832450525]
We collect a total of 341 publications concerning bias mitigation for ML classifiers.
We investigate how existing bias mitigation methods are evaluated in the literature.
Based on the gathered insights, we hope to support practitioners in making informed choices when developing and evaluating new bias mitigation methods.
arXiv Detail & Related papers (2022-07-14T17:16:45Z) - Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks [10.748627178113418]
We propose a novel modular bias mitigation approach, consisting of stand-alone, highly sparse debiasing subnetworks.
We conduct experiments on three classification tasks with gender, race, and age as protected attributes.
arXiv Detail & Related papers (2022-05-30T15:21:25Z) - A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms [19.86635585740634]
We present the conceptual idea and a first implementation of a bias-injection sandbox tool to investigate fairness consequences of various biases.
Unlike existing toolkits, ours provides a controlled environment to counterfactually inject biases in the ML pipeline.
In particular, we can test whether a given remedy can alleviate the injected bias by comparing the predictions resulting after the intervention with the true labels in the unbiased regime, that is, before any bias injection.
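A minimal sketch of that counterfactual injection idea, assuming a synthetic binary task with label bias injected into one group; this is not the toolkit's actual API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Unbiased "ground truth" regime: features X, protected group g, clean labels y.
n = 5000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 3)) + g[:, None] * 0.5
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0.75).astype(int)

# Counterfactually inject label bias: flip some positives in group 1 to 0.
y_biased = y.copy()
flip = (g == 1) & (y == 1) & (rng.random(n) < 0.3)
y_biased[flip] = 0

# Train on the biased labels, then evaluate a candidate remedy (here just the
# baseline model) against the *unbiased* labels y from before injection.
model = LogisticRegression().fit(X, y_biased)
pred = model.predict(X)
for grp in (0, 1):
    acc = (pred[g == grp] == y[g == grp]).mean()
    print(f"group {grp}: accuracy vs. unbiased labels = {acc:.3f}")
```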
arXiv Detail & Related papers (2022-04-21T16:12:19Z) - Repairing Regressors for Fair Binary Classification at Any Decision Threshold [8.322348511450366]
We show that we can increase fair performance across all thresholds at once.
We introduce a formal measure of Distributional Parity, which captures the degree of similarity in the distributions of classifications for different protected groups.
Our main result is to put forward a novel post-processing algorithm based on optimal transport, which provably maximizes Distributional Parity.
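Since one-dimensional optimal transport between score distributions reduces to quantile matching, the idea can be sketched as mapping each group's scores onto a common (here, pooled) distribution; this is a simplified stand-in for the paper's algorithm, not its actual implementation.

```python
import numpy as np

def repair_scores(scores, groups):
    """Map each group's scores onto the pooled score distribution via
    quantile matching, i.e. one-dimensional optimal transport. A simplified
    stand-in for the paper's Distributional Parity algorithm."""
    pooled = np.sort(scores)
    repaired = np.empty_like(scores, dtype=float)
    for grp in np.unique(groups):
        idx = np.flatnonzero(groups == grp)
        ranks = scores[idx].argsort().argsort()   # within-group ranks
        q = (ranks + 0.5) / len(idx)              # mid-rank quantiles in (0, 1)
        repaired[idx] = np.quantile(pooled, q)
    return repaired
```

After repair, every group's scores follow (approximately) the same distribution, so thresholding at any cutoff yields similar positive rates across groups, which is the "any decision threshold" property the title refers to.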
arXiv Detail & Related papers (2022-03-14T20:53:35Z) - Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
It causes severe biases of the head classes (with majority samples) against the tailed ones.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
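As a rough illustration of the confusion-matrix bookkeeping described above, the sketch below keeps an exponentially decayed, row-normalized confusion matrix during training; the reduction to per-sample weights is a simplifying assumption, not the paper's actual pairwise balance formulation.

```python
import numpy as np

class RunningConfusion:
    """Sketch: accumulate the model's ongoing prediction preferences in a
    row-normalized confusion matrix with exponential decay. The actual PCB
    method derives pairwise class-balance terms from such a matrix; here it
    is reduced to a per-sample weight for brevity."""
    def __init__(self, num_classes, momentum=0.99):
        self.m = momentum
        self.cm = np.full((num_classes, num_classes), 1.0 / num_classes)

    def update(self, preds, labels):
        batch = np.zeros_like(self.cm)
        for p, y in zip(preds, labels):
            batch[y, p] += 1.0
        row = batch.sum(axis=1, keepdims=True)
        batch = np.divide(batch, row, out=np.zeros_like(batch), where=row > 0)
        self.cm = self.m * self.cm + (1.0 - self.m) * batch

    def sample_weights(self, labels):
        # Up-weight samples of classes that are often confused with others
        # (typically tail classes pulled toward head classes).
        off_diag = 1.0 - np.diag(self.cm)
        return off_diag[np.asarray(labels)]
```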
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.