Technical Challenges for Training Fair Neural Networks
- URL: http://arxiv.org/abs/2102.06764v1
- Date: Fri, 12 Feb 2021 20:36:45 GMT
- Title: Technical Challenges for Training Fair Neural Networks
- Authors: Valeriia Cherepanova and Vedant Nanda and Micah Goldblum and John P. Dickerson and Tom Goldstein
- Abstract summary: We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives, and produce a range of unintended and undesirable consequences.
- Score: 62.466658247995404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning algorithms have been widely deployed across applications,
many concerns have been raised over the fairness of their predictions,
especially in high stakes settings (such as facial recognition and medical
imaging). To respond to these concerns, the community has proposed and
formalized various notions of fairness as well as methods for rectifying unfair
behavior. While fairness constraints have been studied extensively for
classical models, the effectiveness of methods for imposing fairness on deep
neural networks is unclear. In this paper, we observe that these large models
overfit to fairness objectives, and produce a range of unintended and
undesirable consequences. We conduct our experiments on both facial recognition
and automated medical diagnosis datasets using state-of-the-art architectures.
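The abstract refers to methods for imposing fairness objectives on deep networks. A common family of such methods adds a fairness penalty to the task loss; the sketch below uses a demographic-parity gap as the penalty. This is an illustrative choice, not the specific objective from the paper, and the function names are hypothetical.

```python
import numpy as np

def demographic_parity_gap(scores, groups):
    """Absolute difference in positive-prediction rate between two groups.

    `scores` are model outputs in [0, 1]; `groups` is a 0/1 array of
    sensitive-attribute membership. A gap of 0 means both groups receive
    positive predictions at the same rate.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    preds = (scores >= 0.5).astype(float)  # threshold into hard decisions
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

def fairness_penalized_loss(task_loss, scores, groups, lam=1.0):
    """Total objective: task loss plus a weighted fairness penalty.

    `lam` trades accuracy against fairness; the paper's observation is that
    large models can overfit to exactly this kind of penalty term.
    """
    return task_loss + lam * demographic_parity_gap(scores, groups)
```

With `lam` large, the optimizer can drive the measured gap to zero on the training set without generalizing, which is one way the overfitting the paper describes can manifest.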
Related papers
- Biasing & Debiasing based Approach Towards Fair Knowledge Transfer for Equitable Skin Analysis [16.638722872021095]
We propose an approach based on two biased teachers to transfer fair knowledge into the student network.
Our approach mitigates biases present in the student network without harming its predictive accuracy.
arXiv Detail & Related papers (2024-05-16T17:02:23Z)
- The Fairness Stitch: Unveiling the Potential of Model Stitching in Neural Network De-Biasing [0.043512163406552]
This study introduces a novel method called "The Fairness Stitch" to enhance fairness in deep learning models.
We conduct a comprehensive evaluation of two well-known datasets, CelebA and UTKFace.
Our findings reveal a notable improvement in achieving a balanced trade-off between fairness and performance.
arXiv Detail & Related papers (2023-11-06T21:14:37Z) - Toward Fairness Through Fair Multi-Exit Framework for Dermatological
Disease Diagnosis [16.493514215214983]
We develop a fairness-oriented framework for medical image recognition.
Our framework can improve the fairness condition over the state-of-the-art in two dermatological disease datasets.
arXiv Detail & Related papers (2023-06-26T08:48:39Z) - Last-Layer Fairness Fine-tuning is Simple and Effective for Neural
Networks [36.182644157139144]
We develop a framework to train fair neural networks in an efficient and inexpensive way.
Last-layer fine-tuning alone can effectively promote fairness in deep neural networks.
arXiv Detail & Related papers (2023-04-08T06:49:15Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can satisfy the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
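One recurring theme in the list above is that cheap interventions on the final layer can promote fairness. The sketch below illustrates that idea with a plain NumPy logistic head retrained on frozen features, with the gradient reweighted toward the worse-off group; this is a minimal illustrative construction, not the method of any paper listed here, and the function names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_last_layer(features, labels, groups, lam=1.0, lr=0.1, steps=2000):
    """Retrain only a logistic head on frozen features.

    All earlier layers are assumed fixed: `features` are their outputs.
    At each step, the group with the higher average loss has its gradient
    upweighted in proportion to the loss gap (a soft fairness penalty).
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    eps = 1e-9
    for _ in range(steps):
        p = sigmoid(features @ w + b)
        # per-example binary cross-entropy
        losses = -(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
        gap = losses[groups == 0].mean() - losses[groups == 1].mean()
        # upweight the group currently suffering the larger average loss
        weights = np.ones_like(losses)
        weights[groups == (0 if gap > 0 else 1)] += lam * abs(gap)
        # gradient of mean BCE w.r.t. logits is (p - y) / n, here reweighted
        g = (p - labels) * weights / len(labels)
        w -= lr * (features.T @ g)
        b -= lr * g.sum()
    return w, b
```

Because only `w` and `b` are updated, the procedure costs a tiny fraction of full retraining, which is the appeal of last-layer approaches; whether the fairness gains survive on held-out data is exactly the kind of question the main paper raises.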
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.