Fair Multi-Exit Framework for Facial Attribute Classification
- URL: http://arxiv.org/abs/2301.02989v1
- Date: Sun, 8 Jan 2023 06:18:51 GMT
- Title: Fair Multi-Exit Framework for Facial Attribute Classification
- Authors: Ching-Hao Chiu, Hao-Wei Chung, Yu-Jen Chen, Yiyu Shi, Tsung-Yi Ho
- Abstract summary: In this paper, we extend the concept of the multi-exit framework.
Unlike existing works mainly focusing on accuracy, our multi-exit framework is fairness-oriented.
Experimental results show that the proposed framework can largely improve the fairness condition over the state-of-the-art on the CelebA and UTK Face datasets.
- Score: 16.493514215214983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness has become increasingly pivotal in facial recognition. Without bias
mitigation, deploying unfair AI would harm the interest of the underprivileged
population. In this paper, we observe that although features from the deeper
layers of a neural network generally offer higher accuracy, fairness
conditions deteriorate as we extract features from deeper layers. This
phenomenon motivates us to extend the concept of the multi-exit framework. Unlike
existing works mainly focusing on accuracy, our multi-exit framework is
fairness-oriented, where the internal classifiers are trained to be more
accurate and fairer. During inference, any instance with high confidence from
an internal classifier is allowed to exit early. Moreover, our framework can be
applied to most existing fairness-aware frameworks. Experimental results show
that the proposed framework can largely improve the fairness condition over the
state-of-the-art on the CelebA and UTK Face datasets.
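The abstract describes the inference rule only at a high level: each internal classifier is trained to be both accurate and fair, and at test time an instance exits at the first internal classifier that is sufficiently confident. Below is a minimal sketch of that confidence-based early-exit mechanism, assuming a small CNN backbone split into stages, one exit head per stage, and a fixed softmax-confidence threshold; the architecture, module names, and threshold value are illustrative assumptions, not the authors' implementation or training procedure.

```python
# Minimal early-exit inference sketch (assumptions: toy CNN stages, one exit
# head per stage, fixed softmax-confidence threshold; not the authors' model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExitNet(nn.Module):
    def __init__(self, num_classes: int = 2, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold
        # Backbone split into stages; each stage feeds an internal classifier.
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # One exit head per stage, applied to globally pooled features.
        self.exits = nn.ModuleList([
            nn.Linear(16, num_classes),
            nn.Linear(32, num_classes),
            nn.Linear(64, num_classes),
        ])

    def forward(self, x):
        # Training mode: return logits from every exit so each internal
        # classifier can be supervised (e.g., with accuracy- and
        # fairness-aware losses).
        logits = []
        for stage, head in zip(self.stages, self.exits):
            x = stage(x)
            pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)
            logits.append(head(pooled))
        return logits

    @torch.no_grad()
    def predict_early_exit(self, x):
        # Inference: the instance exits at the first internal classifier whose
        # softmax confidence exceeds the threshold, otherwise it falls through
        # to the final exit.
        for stage, head in zip(self.stages, self.exits):
            x = stage(x)
            pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)
            logits = head(pooled)
            conf, pred = F.softmax(logits, dim=1).max(dim=1)
            if conf.item() >= self.threshold:
                return pred, logits
        return pred, logits  # final exit if no earlier one was confident


# Usage: a single instance (batch size 1), so the exit decision is per image.
model = MultiExitNet(num_classes=2, threshold=0.9).eval()
image = torch.randn(1, 3, 64, 64)
prediction, logits = model.predict_early_exit(image)
```

For batched inference the threshold would be applied per sample rather than per batch; the sketch processes one instance at a time for clarity.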
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z) - Fairness Explainability using Optimal Transport with Applications in
Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
arXiv Detail & Related papers (2023-08-22T00:10:23Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Toward Fairness Through Fair Multi-Exit Framework for Dermatological
Disease Diagnosis [16.493514215214983]
We develop a fairness-oriented framework for medical image recognition.
Our framework can improve the fairness condition over the state-of-the-art on two dermatological disease datasets.
arXiv Detail & Related papers (2023-06-26T08:48:39Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we for the first time analyze unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and effectively works in incomplete supervised settings.
arXiv Detail & Related papers (2022-03-30T11:16:18Z) - Fairness without the sensitive attribute via Causal Variational
Autoencoder [17.675997789073907]
Due to privacy concerns and various regulations such as the RGPD in the EU, many personal sensitive attributes are frequently not collected.
By leveraging recent developments for approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational auto-encoding based framework named SRCVAE to infer a sensitive information proxy.
arXiv Detail & Related papers (2021-09-10T17:12:52Z) - Fair Normalizing Flows [10.484851004093919]
We present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations.
The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor.
We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning.
arXiv Detail & Related papers (2021-06-10T17:35:59Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives, and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z)