A Tale of Fairness Revisited: Beyond Adversarial Learning for Deep
Neural Network Fairness
- URL: http://arxiv.org/abs/2101.02831v1
- Date: Fri, 8 Jan 2021 03:13:44 GMT
- Authors: Becky Mashaido and Winston Moh Tangongho
- Abstract summary: Motivated by the need for fair algorithmic decision making in the age of automation and artificially-intelligent technology, this technical report provides a theoretical insight into adversarial training for fairness in deep learning.
We build upon previous work in adversarial fairness, show the persistent tradeoff between fair predictions and model performance, and explore further mechanisms that help in offsetting this tradeoff.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
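The adversarial training for fairness discussed above can be illustrated with a minimal NumPy sketch (an assumption-laden toy, not the report's actual implementation): a logistic predictor learns the task while an adversary tries to recover a binary sensitive attribute from the predictor's logit, and the predictor's update subtracts the adversary's gradient. The data, architecture, and `lam` tradeoff weight are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 features, binary label y, binary sensitive attribute a
# correlated with the first feature.
n = 500
a = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 2)) + np.stack([a * 1.5, np.zeros(n)], axis=1)
y = (x[:, 0] + x[:, 1] + rng.normal(scale=0.5, size=n) > 0.75).astype(float)

w_pred = np.zeros(2)   # predictor weights (logistic regression)
w_adv = 0.0            # adversary weight on the predictor's logit
lam = 1.0              # strength of the adversarial fairness term
lr = 0.1

for _ in range(200):
    logit = x @ w_pred
    p_y = sigmoid(logit)
    p_a = sigmoid(w_adv * logit)      # adversary guesses a from the logit

    # Adversary descends its own cross-entropy on the sensitive attribute.
    grad_adv = np.mean((p_a - a) * logit)
    w_adv -= lr * grad_adv

    # Predictor descends task loss MINUS the adversary's loss
    # (gradient-reversal style): accurate predictions, uninformative logits.
    grad_task = x.T @ (p_y - y) / n
    grad_fair = x.T @ ((p_a - a) * w_adv) / n
    w_pred -= lr * (grad_task - lam * grad_fair)
```

Raising `lam` trades task accuracy for logits that are less predictive of the sensitive attribute, which is the tradeoff the report examines.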
Related papers
- FairCompass: Operationalising Fairness in Machine Learning [34.964477625987136]
There is a growing imperative to develop responsible AI solutions.
Although a diverse assortment of machine learning fairness solutions has been proposed in the literature, there is reportedly a lack of practical implementation of these tools in real-world applications.
arXiv Detail & Related papers (2023-12-27T21:29:53Z) - The Fairness Stitch: Unveiling the Potential of Model Stitching in
Neural Network De-Biasing [0.043512163406552]
This study introduces a novel method called "The Fairness Stitch" to enhance fairness in deep learning models.
We conduct a comprehensive evaluation of two well-known datasets, CelebA and UTKFace.
Our findings reveal a notable improvement in achieving a balanced trade-off between fairness and performance.
arXiv Detail & Related papers (2023-11-06T21:14:37Z) - Towards a General Framework for Continual Learning with Pre-training [55.88910947643436]
We present a general framework for continual learning of sequentially arrived tasks with the use of pre-training.
We decompose its objective into three hierarchical components, including within-task prediction, task-identity inference, and task-adaptive prediction.
We propose an innovative approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics.
arXiv Detail & Related papers (2023-10-21T02:03:38Z) - A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
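Fairness with respect to protected subgroups, as discussed above, is commonly checked at the group level with the demographic parity gap. The helper below is a generic illustration of that metric, not the CLAIRE framework itself.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); sensitive: binary group labels (0/1).
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # positive rate, group 0
    rate_1 = y_pred[sensitive == 1].mean()  # positive rate, group 1
    return abs(rate_1 - rate_0)
```

A gap of 0 means both groups receive positive predictions at the same rate; a gap of 1 means the model's positive predictions fall entirely on one group.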
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Last-Layer Fairness Fine-tuning is Simple and Effective for Neural
Networks [36.182644157139144]
We develop a framework to train fair neural networks in an efficient and inexpensive way.
Last-layer fine-tuning alone can effectively promote fairness in deep neural networks.
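The last-layer idea can be sketched as follows (a hedged toy, not the paper's method): features from a frozen network are held fixed, and only a linear head is retrained with the task loss plus a demographic-parity penalty. The synthetic features, penalty form, and `mu` weight are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these are frozen penultimate-layer features from a trained network.
n, d = 400, 3
a = rng.integers(0, 2, size=n)                  # sensitive attribute
phi = rng.normal(size=(n, d)) + a[:, None]      # features leak the attribute
y = (phi.sum(axis=1) + rng.normal(scale=0.5, size=n) > 1.5).astype(float)

w = np.zeros(d)      # the only trainable parameters: the last layer
mu = 2.0             # weight on the demographic-parity penalty
lr = 0.1

for _ in range(300):
    p = sigmoid(phi @ w)
    # Penalise the squared gap in average predicted rate between groups.
    gap = p[a == 1].mean() - p[a == 0].mean()
    s = p * (1 - p)                              # sigmoid derivative
    g_gap = (phi[a == 1] * s[a == 1, None]).mean(axis=0) \
          - (phi[a == 0] * s[a == 0, None]).mean(axis=0)
    g_task = phi.T @ (p - y) / n
    w -= lr * (g_task + mu * 2 * gap * g_gap)    # grad of loss + mu * gap^2
```

Because only `w` is updated, this is far cheaper than retraining the whole network, which is the efficiency argument the paper makes.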
arXiv Detail & Related papers (2023-04-08T06:49:15Z) - Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives, and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - Improving Fair Predictions Using Variational Inference In Causal Models [8.557308138001712]
The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives.
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
arXiv Detail & Related papers (2020-08-25T08:27:11Z) - FairALM: Augmented Lagrangian Method for Training Fair Models with
Little Regret [42.66567001275493]
It is now accepted that, because of biases in the datasets presented to models, fairness-oblivious training will lead to unfair models.
Here, we study mechanisms that impose fairness concurrently while training the model.
arXiv Detail & Related papers (2020-04-03T03:18:53Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.