FaiR-N: Fair and Robust Neural Networks for Structured Data
- URL: http://arxiv.org/abs/2010.06113v1
- Date: Tue, 13 Oct 2020 01:53:15 GMT
- Title: FaiR-N: Fair and Robust Neural Networks for Structured Data
- Authors: Shubham Sharma, Alan H. Gee, David Paydarfar, Joydeep Ghosh
- Abstract summary: We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary.
We show that training with this loss yields more fair and robust neural networks with similar accuracies to models trained without it.
- Score: 10.14835182649819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in machine learning is crucial when individuals are subject to
automated decisions made by models in high-stake domains. Organizations that
employ these models may also need to satisfy regulations that promote
responsible and ethical A.I. While fairness metrics relying on comparing model
error rates across subpopulations have been widely investigated for the
detection and mitigation of bias, fairness in terms of the equalized ability to
achieve recourse for different protected attribute groups has been relatively
unexplored. We present a novel formulation for training neural networks that
considers the distance of data points to the decision boundary such that the
new objective: (1) reduces the average distance to the decision boundary
between two groups for individuals subject to a negative outcome in each group,
i.e. the network is more fair with respect to the ability to obtain recourse,
and (2) increases the average distance of data points to the boundary to
promote adversarial robustness. We demonstrate that training with this loss
yields more fair and robust neural networks with similar accuracies to models
trained without it. Moreover, we qualitatively motivate and empirically show
that reducing recourse disparity across groups also improves fairness measures
that rely on error rates. To the best of our knowledge, this is the first time
that recourse capabilities across groups are considered to train fairer neural
networks, and a relation between error-rate-based fairness and recourse-based
fairness is investigated.
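To make the objective concrete, below is a minimal sketch of the kind of distance-to-boundary penalty the abstract describes, assuming a binary classifier on structured (tabular) inputs and a first-order |logit| / ||gradient|| estimate of the distance to the decision boundary. The names `fair_robust_penalty`, `lambda_fair`, and `lambda_rob` are assumptions; this is not the paper's exact formulation.

```python
# Minimal sketch (not the paper's exact objective): estimate each point's
# distance to the decision boundary with the first-order proxy
# |logit| / ||grad_x logit||, then (1) penalize the gap in average distance
# between two protected groups among points currently predicted negative
# (recourse fairness) and (2) reward larger distances overall (robustness).
import torch

def boundary_distance(model, x):
    """First-order estimate of the distance from x to the decision boundary."""
    x = x.clone().requires_grad_(True)
    logit = model(x).squeeze(-1)                  # one logit per row (binary task)
    grad = torch.autograd.grad(logit.sum(), x, create_graph=True)[0]
    return logit.abs() / (grad.norm(dim=1) + 1e-8)

def fair_robust_penalty(model, x, group, lambda_fair=1.0, lambda_rob=0.1):
    """Penalty added to the usual classification loss during training (assumed names)."""
    dist = boundary_distance(model, x)
    neg = model(x).squeeze(-1) < 0                # points denied the favorable outcome
    d0, d1 = dist[neg & (group == 0)], dist[neg & (group == 1)]
    recourse_gap = (d0.mean() - d1.mean()).abs() if len(d0) and len(d1) else dist.sum() * 0
    robustness = -dist.mean()                     # larger distances -> more robust
    return lambda_fair * recourse_gap + lambda_rob * robustness
```

In training, this penalty would be added to the standard cross-entropy term; the paper's actual loss and distance computation may differ.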
Related papers
- Causal Fairness-Guided Dataset Reweighting using Neural Networks [25.492273448917278]
We construct a reweighting scheme of datasets to address causal fairness.
Our approach aims at mitigating bias by considering the causal relationships among variables.
We show that our method can achieve causal fairness on the data while remaining close to the original data for downstream tasks.
arXiv Detail & Related papers (2023-11-17T13:31:19Z)
- Preventing Arbitrarily High Confidence on Far-Away Data in Point-Estimated Discriminative Neural Networks [28.97655735976179]
ReLU networks have been shown to almost always yield high confidence predictions when the test data are far away from the training set.
We overcome this problem by adding a term to the output of the neural network that corresponds to the logit of an extra class.
This technique provably prevents arbitrarily high confidence on far-away test data while maintaining a simple discriminative point-estimate training.
arXiv Detail & Related papers (2023-11-07T03:19:16Z)
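The entry above adds a term corresponding to the logit of an extra class. A hedged sketch of that idea follows; the specific form of the extra logit (scaled distance to training centroids) is an assumption for illustration, not necessarily the paper's construction.

```python
# Sketch: append one extra logit to the K class logits before the softmax so
# that confidence over the K real classes cannot reach 1 far from the data.
import torch
import torch.nn.functional as F

def bounded_confidence(logits, x, train_centroids, alpha=1.0):
    """logits: (B, K); x: (B, D) inputs; train_centroids: (C, D) summary of training data."""
    dist = torch.cdist(x, train_centroids).min(dim=1).values   # distance to nearest centroid
    extra = alpha * dist.unsqueeze(1)                           # extra-class logit, grows far away
    probs = F.softmax(torch.cat([logits, extra], dim=1), dim=1)
    return probs[:, :-1]                                        # probabilities of the real classes
```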
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Population-Based Evolutionary Gaming for Unsupervised Person Re-identification [26.279581599246224]
Unsupervised person re-identification has achieved great success through the self-improvement of individual neural networks.
We develop a population-based evolutionary gaming (PEG) framework in which a population of diverse neural networks is trained concurrently through selection, reproduction, mutation, and population mutual learning.
PEG produces new state-of-the-art accuracy for person re-identification, indicating the great potential of population-based network cooperative training for unsupervised learning.
arXiv Detail & Related papers (2023-06-08T14:33:41Z)
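A schematic population-based training loop in the spirit of the selection, reproduction, and mutation steps described above; the fitness measure, mutation operator, and per-network training routine are placeholders, and the paper's population mutual-learning step is omitted.

```python
# Schematic population-based evolutionary training loop (placeholder operators;
# PEG's actual selection criterion and mutual-learning step are not reproduced).
import copy
import random
import torch

def mutate(net, std=0.01):
    """Placeholder mutation: perturb parameters with Gaussian noise."""
    with torch.no_grad():
        for p in net.parameters():
            p.add_(torch.randn_like(p) * std)

def evolve(population, train_one_epoch, fitness, generations=10, keep=0.5):
    for _ in range(generations):
        for net in population:
            train_one_epoch(net)                      # individual self-improvement
        population.sort(key=fitness, reverse=True)    # selection
        survivors = population[: max(1, int(len(population) * keep))]
        while len(survivors) < len(population):
            child = copy.deepcopy(random.choice(survivors))   # reproduction
            mutate(child)
            survivors.append(child)
        population = survivors
    return population
```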
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
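One schematic way to express the two criteria above as penalties on learned embeddings, assuming both groups appear in the batch and that a counterfactual embedding (the same record with its sensitive attribute flipped) is available; DualFair's actual contrastive objective is not reproduced here.

```python
# Schematic penalties (assumed forms, not DualFair's contrastive losses):
# group fairness matches group-mean embeddings, counterfactual fairness keeps
# each embedding close to the embedding of its attribute-flipped counterpart.
import torch
import torch.nn.functional as F

def dual_fairness_penalty(z, z_counterfactual, group):
    group_gap = (z[group == 0].mean(0) - z[group == 1].mean(0)).norm()
    cf_gap = F.mse_loss(z, z_counterfactual)
    return group_gap + cf_gap
```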
- Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play [16.262574174989698]
Introspective Self-play (ISP) is a simple approach to improve the uncertainty estimation of a deep neural network under dataset bias.
We show that ISP provably improves the bias-awareness of the model representation and the resulting uncertainty estimates.
arXiv Detail & Related papers (2023-02-11T22:59:08Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456]
We propose an approach to formally verify neural networks against group fairness properties.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
arXiv Detail & Related papers (2021-07-18T04:34:31Z)
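The property being verified above can be illustrated with a simplified empirical check: compare the favorable-outcome rates the classifier assigns to the two groups. The paper obtains guarantees through a learned Markov chain; this Monte Carlo version is only a sketch of the property itself.

```python
# Simplified Monte Carlo check of group fairness for a trained binary
# classifier (illustrative only; not the paper's Markov-chain verification).
import torch

@torch.no_grad()
def group_fairness_gap(model, x, group):
    favorable = (model(x).squeeze(-1) > 0).float()
    return (favorable[group == 0].mean() - favorable[group == 1].mean()).abs()
```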
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
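The entry above is about provable guarantees; as a generic illustration only, fairness-constrained training is often implemented as a Lagrangian relaxation with a dual update on the multiplier. This is a standard recipe, not the paper's analysis.

```python
# Generic Lagrangian relaxation of a fairness constraint (standard recipe shown
# for illustration; the paper's contribution is the no-regret analysis).
def constrained_loss(task_loss, fairness_violation, lam):
    """Primal objective: task loss plus multiplier-weighted constraint violation."""
    return task_loss + lam * fairness_violation

def dual_step(lam, fairness_violation, eta=0.01):
    """Dual ascent on the multiplier, kept non-negative."""
    return max(0.0, lam + eta * float(fairness_violation))
```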
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
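A sketch of the pair-of-networks scheme described above, following the paper's relative-difficulty weighting in simplified form; the generalized cross-entropy hyperparameter and the exact weighting are assumptions.

```python
# Sketch of failure-based debiasing: a "biased" network trained with a
# generalized cross-entropy (GCE) loss amplifies easy (spurious) patterns, and
# the debiased network up-weights the samples the biased one handles badly.
import torch
import torch.nn.functional as F

def generalized_ce(logits, target, q=0.7):
    """GCE loss: emphasizes easy samples, so the biased net latches onto bias."""
    p = F.softmax(logits, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    return ((1.0 - p.clamp_min(1e-8) ** q) / q).mean()

def debiasing_step(biased_net, debiased_net, x, y, opt_b, opt_d):
    logits_b, logits_d = biased_net(x), debiased_net(x)
    # Per-sample relative difficulty: high when the biased net fails on (x, y).
    ce_b = F.cross_entropy(logits_b, y, reduction="none").detach()
    ce_d = F.cross_entropy(logits_d, y, reduction="none").detach()
    w = ce_b / (ce_b + ce_d + 1e-8)
    # Biased net keeps amplifying bias; debiased net focuses on its failures.
    loss_b = generalized_ce(logits_b, y)
    loss_d = (w * F.cross_entropy(logits_d, y, reduction="none")).mean()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_b.item(), loss_d.item()
```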
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.