Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting
Off-the-Shelf Models
- URL: http://arxiv.org/abs/2308.13730v1
- Date: Sat, 26 Aug 2023 02:04:10 GMT
- Title: Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting
Off-the-Shelf Models
- Authors: Yi Sheng, Junhuan Yang, Lei Yang, Yiyu Shi, Jingtong Hu, Weiwen Jiang
- Abstract summary: Model fairness (a.k.a. bias) has become one of the most critical problems in a wide range of AI applications.
We propose a novel Multi-Dimension Fairness framework, namely Muffin, which includes an automatic tool to unite off-the-shelf models to improve fairness on multiple attributes simultaneously.
- Score: 9.01924639426239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model fairness (a.k.a. bias) has become one of the most critical problems in a wide range of AI applications. An unfair model in autonomous driving may cause a traffic accident if corner cases (e.g., extreme weather) are not handled fairly; it may also incur healthcare disparities if the AI model misdiagnoses a certain group of people (e.g., people with brown or black skin). In recent years, research works have emerged to address unfairness, mainly focusing on a single unfair attribute, like skin tone; however, real-world data commonly have multiple attributes, and unfairness can exist in more than one of them, a setting called 'multi-dimensional fairness'. In this paper, we first reveal a strong correlation between the different unfair attributes: optimizing fairness on one attribute can cause fairness on the others to collapse. We then propose a novel Multi-Dimension Fairness framework, namely Muffin, which includes an automatic tool that unites off-the-shelf models to improve fairness on multiple attributes simultaneously. Case studies on dermatology datasets with two unfair attributes show that an existing approach achieves a 21.05% fairness improvement on the first attribute while making the second attribute less fair by 1.85%. In contrast, the proposed Muffin unites multiple models to achieve 26.32% and 20.37% fairness improvements on the two attributes simultaneously, while also obtaining a 5.58% accuracy gain.
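As a rough illustration of the uniting idea, the sketch below enumerates small ensembles of pretrained models and keeps the one whose worst per-attribute fairness gap is smallest. This is a minimal sketch under assumptions, not the authors' actual Muffin algorithm; the accuracy-gap fairness proxy and all names are illustrative.

```python
# Minimal sketch of "uniting off-the-shelf models" for multi-attribute
# fairness (an illustrative assumption, NOT the authors' Muffin algorithm).
# model_probs: list of (n_samples, n_classes) probability arrays, one per
# pretrained model; attribute_groups: one group-label array per sensitive
# attribute (e.g., skin tone and age).
from itertools import combinations

import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Fairness proxy: largest accuracy difference between any two groups."""
    accs = [np.mean(y_pred[groups == g] == y_true[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)

def ensemble_predict(probs_list):
    """Unite models by averaging their predicted class probabilities."""
    return np.mean(probs_list, axis=0).argmax(axis=1)

def unite_models(model_probs, y_true, attribute_groups, max_size=3):
    """Pick the model subset whose averaged prediction minimizes the worst
    fairness gap over all sensitive attributes."""
    best, best_gap = None, float("inf")
    for k in range(1, max_size + 1):
        for subset in combinations(range(len(model_probs)), k):
            y_pred = ensemble_predict([model_probs[i] for i in subset])
            gap = max(accuracy_gap(y_true, y_pred, g) for g in attribute_groups)
            if gap < best_gap:
                best, best_gap = subset, gap
    return best, best_gap
```

Here `unite_models` returns only the fairest subset; a fuller version would also trade fairness off against accuracy, since the paper reports gains on both.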
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- A Differentiable Distance Approximation for Fairer Image Classification [31.471917430653626]
We propose a differentiable approximation of the variance of demographics, a metric that can be used to measure the bias, or unfairness, in an AI model.
Our approximation can be optimised alongside the regular training objective, which eliminates the need for any extra models during training.
We demonstrate that our approach improves the fairness of AI models in varied task and dataset scenarios.
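For a sense of how such a penalty can be made differentiable, here is a hedged PyTorch sketch; the exact formulation is an assumption, not the paper's. Per-group soft accuracy (the probability assigned to the true class, averaged within each demographic group) is differentiable, and its variance across groups can be added to the task loss.

```python
# Hedged sketch of a variance-of-demographics penalty, in the spirit of the
# entry above but not its exact formulation (an assumption for illustration).
import torch
import torch.nn.functional as F

def demographic_variance(logits, targets, groups):
    """Variance of per-group soft accuracy (prob. assigned to true class)."""
    probs = F.softmax(logits, dim=1)
    soft_correct = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    group_means = [soft_correct[groups == g].mean() for g in groups.unique()]
    return torch.stack(group_means).var()

def training_loss(logits, targets, groups, lam=1.0):
    # Regular objective plus the differentiable fairness penalty,
    # so no extra model is needed during training.
    return F.cross_entropy(logits, targets) + lam * demographic_variance(
        logits, targets, groups)
```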
arXiv Detail & Related papers (2022-10-09T23:02:18Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers the biases induced by nodes' own sensitive attributes and those of their neighbors.
We generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
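To make the perturbation idea above concrete, here is a small PyTorch sketch under stated assumptions: the toy one-layer mean-aggregation encoder and all names are illustrative stand-ins, not the paper's model. It flips the sensitive feature of a node and its neighbors, re-encodes the graph, and measures how much that node's representation shifts.

```python
# Illustrative sketch of a graph counterfactual perturbation (assumptions:
# binary sensitive feature in column s_col, dense adjacency A, toy encoder).
import torch

def encode(X, A, W):
    """Toy one-layer GNN: mean-aggregate neighbors, then a linear map."""
    deg = A.sum(1, keepdim=True).clamp(min=1)
    return torch.relu((A @ X) / deg @ W)

def counterfactual_gap(X, A, W, node, s_col):
    """Representation shift when the node's and its neighbors' sensitive
    bit is flipped; a fair encoder should keep this gap small."""
    X_cf = X.clone()
    neigh = (A[node] > 0) | (torch.arange(len(X)) == node)
    X_cf[neigh, s_col] = 1 - X_cf[neigh, s_col]
    return (encode(X, A, W)[node] - encode(X_cf, A, W)[node]).norm()
```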
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Developing a novel fair-loan-predictor through a multi-sensitive debiasing pipeline: DualFair [2.149265948858581]
We create a novel bias mitigation technique called DualFair and develop a new fairness metric (i.e., AWI) that can handle MSPSO.
We test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains better fairness and accuracy metrics than current state-of-the-art models.
arXiv Detail & Related papers (2021-10-17T23:13:43Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Multi-Fair Pareto Boosting [7.824964622317634]
We introduce a new fairness notion, Multi-Max Mistreatment (MMM), which measures unfairness while considering both the (multi-attribute) protected group and the class membership of instances.
We solve the problem with a boosting approach that incorporates multi-fairness treatment in the in-training distribution updates and in a post-training step.
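Since the summary does not define MMM precisely, the sketch below assumes mistreatment is the per-cell error rate and the maximum runs over every combination of (multi-attribute) protected group and class; treat it as an illustration, not the paper's exact metric.

```python
# Hedged sketch of a Multi-Max-Mistreatment-style score (an assumed
# definition for illustration, not the paper's exact formulation).
# group_ids encodes the joint protected-attribute combination per instance
# (e.g., skin tone x gender).
import numpy as np

def multi_max_mistreatment(y_true, y_pred, group_ids):
    """Worst error rate over all (multi-attribute group, class) cells."""
    worst = 0.0
    for g in np.unique(group_ids):
        for c in np.unique(y_true):
            cell = (group_ids == g) & (y_true == c)
            if cell.any():
                worst = max(worst, np.mean(y_pred[cell] != y_true[cell]))
    return worst
```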
arXiv Detail & Related papers (2021-04-27T16:37:35Z)
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach, which enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness in both the discrete and continuous settings.
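As a generic illustration of the adversarial idea (a sketch under assumptions, not this paper's counterfactual-fairness model), the snippet below alternates between an adversary that tries to recover the sensitive attribute from the learned representation and a predictor trained to fool it. All module names are illustrative.

```python
# Generic adversarial-debiasing sketch (illustrative assumption, not the
# paper's exact architecture or losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Predictor(nn.Module):
    """Encoder plus task head; the adversary reads the encoder output z."""
    def __init__(self, d_in, d_hid, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.head = nn.Linear(d_hid, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z

def adversarial_step(pred, adv, opt_pred, opt_adv, x, y, s, lam=1.0):
    """One alternating update: the adversary learns to recover the sensitive
    attribute s from z; the predictor learns the task while fooling it."""
    # 1) Update the adversary on a detached representation.
    _, z = pred(x)
    adv_loss = F.cross_entropy(adv(z.detach()), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) Update the predictor: task loss minus the adversary's success.
    logits, z = pred(x)
    pred_loss = F.cross_entropy(logits, y) - lam * F.cross_entropy(adv(z), s)
    opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()
    return pred_loss.item()
```

Here `adv` can be as simple as `nn.Linear(d_hid, n_sensitive_values)`; alternating updates are one common choice, with gradient reversal being an equivalent single-pass alternative.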
arXiv Detail & Related papers (2020-08-30T09:06:03Z)