Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss
- URL: http://arxiv.org/abs/2105.06345v1
- Date: Thu, 13 May 2021 15:01:14 GMT
- Title: Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss
- Authors: Elisa Ferrari, Davide Bacciu
- Abstract summary: We propose a unified loss correction to address issues related to Fairness, Biases and Imbalances (FBI-loss)
The correction capabilities of the proposed approach are assessed on three real-world benchmarks.
- Score: 11.291571222801027
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Resilience to class imbalance and confounding biases, together with the assurance of fairness guarantees, are highly desirable properties of autonomous decision-making systems with real-life impact. Many targeted solutions have been proposed to address these three problems separately; however, a unifying perspective has been missing. With this work, we provide a general formalization showing that they are different expressions of imbalance. Following this intuition, we formulate a unified loss correction to address issues related to Fairness, Biases and Imbalances (FBI-loss). The correction capabilities of the proposed approach are assessed on three real-world benchmarks, each associated with one of the issues under consideration, and on a family of synthetic datasets designed to probe the effectiveness of our loss on tasks of varying complexity. The empirical results highlight that the flexible formulation of the FBI-loss also yields performance competitive with literature solutions specialised for the individual problems.
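The abstract does not give the FBI-loss formula itself. As a minimal illustrative sketch of the general idea of a unified loss correction, the following reweighted, focal-style cross-entropy upweights samples from under-represented classes; the function names, the inverse-frequency weighting, and the exponent `gamma` are assumptions for illustration, not the paper's actual formulation:

```python
import math
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rarer classes receive larger weight."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

def corrected_loss(prob_true, label, weights, gamma=2.0):
    """Weighted, focal-style cross-entropy on the probability assigned to
    the true class (an illustrative sketch, not the paper's FBI-loss)."""
    return weights[label] * (1.0 - prob_true) ** gamma * (-math.log(prob_true))

labels = [0, 0, 0, 0, 1]          # imbalanced toy dataset
w = class_weights(labels)
# a confident prediction on the majority class costs little...
easy = corrected_loss(0.9, 0, w)
# ...while the same confidence on the minority class weighs more
hard = corrected_loss(0.9, 1, w)
```

The `(1 - prob_true) ** gamma` factor additionally downweights well-classified samples, so both class-level and sample-level imbalance are corrected by a single expression.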
Related papers
- Flexible Fairness-Aware Learning via Inverse Conditional Permutation [0.0]
We introduce an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme.
We show that FairICP offers a theoretically justified, flexible, and efficient scheme to promote equalized odds under fairness conditions described by complex and multidimensional sensitive attributes.
arXiv Detail & Related papers (2024-04-08T16:57:44Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
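The summary names gradient alignment (GA) only in passing. One way to balance the contributions of two sample groups, sketched here under the assumption that GA equalizes gradient magnitudes (function and variable names are hypothetical), is to rescale the aggregate gradient of the typically scarce bias-conflicting samples to match the bias-aligned one in norm:

```python
import math

def align(grad_aligned, grad_conflicting):
    """Rescale the bias-conflicting gradient so its norm matches the
    bias-aligned gradient, then average the two (a sketch of the
    balancing idea, not the paper's exact GA procedure)."""
    na = math.sqrt(sum(g * g for g in grad_aligned))
    nc = math.sqrt(sum(g * g for g in grad_conflicting))
    scale = na / nc if nc > 0 else 0.0
    return [(a + scale * c) / 2.0 for a, c in zip(grad_aligned, grad_conflicting)]

# conflicting-group gradient is 10x smaller; after alignment it
# contributes as much as the aligned-group gradient
g = align([3.0, 4.0], [0.3, 0.4])
```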
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, machine learning (ML) algorithms, despite their apparent objectivity, are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- AutoBalance: Optimized Loss Functions for Imbalanced Data [38.64606886588534]
We propose AutoBalance, a bi-level optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives.
Specifically, a lower-level problem trains the model weights, and an upper-level problem tunes the loss function by monitoring and optimizing the desired objective over the validation data.
Our loss design enables personalized treatment for classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes.
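A parametric cross-entropy of the kind this summary mentions can be sketched as a standard cross-entropy whose logits receive per-class multiplicative and additive adjustments; the exact parameterisation below is an assumption for illustration, and in the AutoBalance setting such parameters would be tuned by the upper-level problem on validation data:

```python
import math

def parametric_ce(logits, y, mult, add):
    """Cross-entropy with per-class multiplicative (mult) and additive (add)
    logit adjustments. Tuning these per class changes each class's effective
    margin, which is how a parametric loss can trade accuracy for fairness.
    (An illustrative sketch, not the paper's exact formulation.)"""
    z = [mult[k] * logits[k] + add[k] for k in range(len(logits))]
    m = max(z)                                    # stabilised log-sum-exp
    log_sum = m + math.log(sum(math.exp(v - m) for v in z))
    return log_sum - z[y]

plain = parametric_ce([2.0, 0.5], 1, [1.0, 1.0], [0.0, 0.0])
# an additive offset on class 1's logit lowers the loss for a
# class-1 sample, shifting training pressure between the classes
shifted = parametric_ce([2.0, 0.5], 1, [1.0, 1.0], [0.0, 1.0])
```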
arXiv Detail & Related papers (2022-01-04T15:53:23Z)
- Model-Based Approach for Measuring the Fairness in ASR [11.076999352942954]
We introduce mixed-effects Poisson regression to better measure and interpret any WER difference among subgroups of interest.
We demonstrate the validity of proposed model-based approach on both synthetic and real-world speech data.
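The paper fits a mixed-effects Poisson regression; the bare-bones idea it generalises is to treat word errors as counts with the reference word count as exposure and compare rates across groups. The sketch below computes only the pooled fixed-effect rate comparison (data and names are invented for illustration; no random effects are modelled):

```python
import math

# Toy per-utterance counts: (word errors, reference words, group)
data = [(5, 100, "A"), (3, 80, "A"), (9, 100, "B"), (7, 70, "B")]

def group_rate(records, group):
    """Pooled word-error rate for one group: total errors / total words."""
    errs = sum(e for e, n, g in records if g == group)
    words = sum(n for e, n, g in records if g == group)
    return errs / words

rate_a = group_rate(data, "A")
rate_b = group_rate(data, "B")
# in a Poisson model with log link, this is the group coefficient:
log_rate_ratio = math.log(rate_b / rate_a)  # > 0 means group B has higher WER
```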
arXiv Detail & Related papers (2021-09-19T05:24:01Z)
- Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observation: both regional and gender inconsistency exist in the legal system, but gender inconsistency is much smaller than regional inconsistency.
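The core measurement described above is the disagreement among models trained on different groups. A simplified stand-in for such a metric (the actual LInCo definition is not given here, and all names are hypothetical) can be computed as the fraction of cases on which group-specific models disagree:

```python
def inconsistency(preds_by_group):
    """Fraction of cases on which at least two group-specific models
    disagree -- a simplified stand-in for the LInCo metric."""
    n = len(next(iter(preds_by_group.values())))
    cases = zip(*preds_by_group.values())          # one tuple per case
    disagreements = sum(1 for ps in cases if len(set(ps)) > 1)
    return disagreements / n

# Judgments by models trained on two regions (hypothetical labels)
preds = {"north": [1, 0, 1, 1], "south": [1, 1, 1, 0]}
linco = inconsistency(preds)  # models differ on 2 of 4 cases -> 0.5
```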
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
- Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z)
- Counterfactual Representation Learning with Balancing Weights [74.67296491574318]
Key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.
Recent literature has explored representation learning to achieve this goal.
We develop an algorithm for flexible, scalable and accurate estimation of causal effects.
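The classic instance of balancing weights, which the paper's representation-learning approach builds on, is inverse-propensity weighting: each unit is weighted by the inverse of the probability of the treatment it actually received, so that covariates are balanced across treatment groups in the reweighted sample. A minimal sketch (variable names are illustrative):

```python
def ipw_weights(treated, propensity):
    """Inverse-propensity weights: treated units are weighted by 1/e(x),
    controls by 1/(1 - e(x)), where e(x) is the estimated propensity score."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

treated = [1, 1, 0, 0]
prop = [0.8, 0.5, 0.8, 0.5]     # estimated propensity scores e(x)
w = ipw_weights(treated, prop)
# a control unit with a high propensity of treatment (rare in the
# control group) receives a large weight to restore balance
```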
arXiv Detail & Related papers (2020-10-23T19:06:03Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.