FAIRER: Fairness as Decision Rationale Alignment
- URL: http://arxiv.org/abs/2306.15299v1
- Date: Tue, 27 Jun 2023 08:37:57 GMT
- Title: FAIRER: Fairness as Decision Rationale Alignment
- Authors: Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu
- Abstract summary: Deep neural networks (DNNs) have made significant progress, but often suffer from fairness issues.
It is unclear how the trained network makes a fair prediction, which limits future fairness improvements.
We propose gradient-guided parity alignment, which encourages gradient-weighted consistency of neurons across subgroups.
- Score: 23.098752318439782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have made significant progress, but often suffer
from fairness issues, as deep models typically show distinct accuracy
differences among certain subgroups (e.g., males and females). Existing
research addresses this critical issue by employing fairness-aware loss
functions to constrain the last-layer outputs and directly regularize DNNs.
Although the fairness of DNNs is improved, it is unclear how the trained
network makes a fair prediction, which limits future fairness improvements. In
this paper, we investigate fairness from the perspective of decision rationale
and define the parameter parity score to characterize the fair decision process
of networks by analyzing neuron influence in various subgroups. Extensive
empirical studies show that unfairness can arise from unaligned decision
rationales across subgroups. Existing fairness regularization terms fail
to achieve decision rationale alignment because they only constrain last-layer
outputs while ignoring intermediate neuron alignment. To address the issue, we
formulate fairness as a new task, i.e., decision rationale alignment, which
requires DNNs' neurons to have consistent responses across subgroups at both
intermediate processes and the final prediction. To make this idea practical
during optimization, we relax the naive objective function and propose
gradient-guided parity alignment, which encourages gradient-weighted
consistency of neurons across subgroups. Extensive experiments on a variety of
datasets show that our method can significantly enhance fairness while
sustaining a high level of accuracy and outperforming other approaches by a
wide margin.
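To make the idea concrete, the sketch below illustrates one way gradient-guided parity alignment could look in PyTorch, based only on the abstract: intermediate neuron responses are weighted by the gradients of the task loss with respect to them, and the resulting per-neuron influences are encouraged to agree across two subgroups. The model, the choice of layer, the mean-absolute-gap penalty, and the names (TwoLayerNet, gradient_weighted_influence, parity_alignment_loss, lam) are illustrative assumptions, not the authors' exact relaxed objective.

```python
# Minimal sketch of gradient-guided parity alignment, assuming a toy two-layer
# classifier and two subgroup batches. Hyperparameters and the exact weighting
# scheme are assumptions, not the paper's formulation.
import torch
import torch.nn as nn


class TwoLayerNet(nn.Module):
    """Small classifier that also exposes its hidden activations."""

    def __init__(self, d_in=16, d_hidden=32, n_classes=2):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        h = self.hidden(x)            # intermediate neuron responses
        return self.head(h), h


def gradient_weighted_influence(model, x, y, criterion):
    """Per-neuron influence: activation times the gradient of the task loss w.r.t. it."""
    logits, h = model(x)
    loss = criterion(logits, y)
    # create_graph=True keeps the alignment penalty itself differentiable
    grads = torch.autograd.grad(loss, h, create_graph=True)[0]
    return (h * grads).mean(dim=0)    # shape: (d_hidden,)


def parity_alignment_loss(model, batch_a, batch_b, criterion, lam=0.5):
    """Task loss on both subgroups plus a penalty on misaligned neuron influence."""
    (xa, ya), (xb, yb) = batch_a, batch_b
    logits_a, _ = model(xa)
    logits_b, _ = model(xb)
    task = criterion(logits_a, ya) + criterion(logits_b, yb)
    infl_a = gradient_weighted_influence(model, xa, ya, criterion)
    infl_b = gradient_weighted_influence(model, xb, yb, criterion)
    align = (infl_a - infl_b).abs().mean()   # decision-rationale misalignment
    return task + lam * align


if __name__ == "__main__":
    torch.manual_seed(0)
    model, criterion = TwoLayerNet(), nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy batches standing in for two demographic subgroups
    batch_a = (torch.randn(64, 16), torch.randint(0, 2, (64,)))
    batch_b = (torch.randn(64, 16), torch.randint(0, 2, (64,)))
    loss = parity_alignment_loss(model, batch_a, batch_b, criterion)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"combined loss: {loss.item():.4f}")
```

Penalizing the mean absolute gap between gradient-weighted activations is one simple choice of alignment term; the paper's relaxed objective may weight or aggregate neuron influences differently.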
Related papers
- NeuFair: Neural Network Fairness Repair with Dropout [19.49034966552718]
This paper investigates neuron dropout as a post-processing bias mitigation method for deep neural networks (DNNs).
We show that our design of randomized algorithms is effective and efficient in improving fairness (up to 69%) with minimal or no model performance degradation.
arXiv Detail & Related papers (2024-07-05T05:45:34Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.8238848494579714]
We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
Our results show that MAPPING can achieve better trade-offs between utility and fairness while mitigating privacy risks of sensitive information leakage.
arXiv Detail & Related papers (2024-01-23T14:59:46Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN achieves remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Continual Learning via Sequential Function-Space Variational Inference [65.96686740015902]
We propose an objective derived by formulating continual learning as sequential function-space variational inference.
Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions.
We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods.
arXiv Detail & Related papers (2023-12-28T18:44:32Z)
- FairNorm: Fair and Fast Graph Neural Network Training [9.492903649862761]
Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art performance on a number of graph-based learning tasks.
It has been shown that GNNs may inherit and even amplify bias within training data, which leads to unfair results towards certain sensitive groups.
This work proposes FairNorm, a unified normalization framework that reduces the bias in GNN-based learning.
arXiv Detail & Related papers (2022-05-20T06:10:27Z)
- Debiased Graph Neural Networks with Agnostic Label Selection Bias [59.61301255860836]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data.
We propose a novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer.
Our proposed model outperforms the state-of-the-art methods and DGNN is a flexible framework to enhance existing GNNs.
arXiv Detail & Related papers (2022-01-19T16:50:29Z)
- FaiR-N: Fair and Robust Neural Networks for Structured Data [10.14835182649819]
We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary.
We show that training with this loss yields fairer and more robust neural networks with accuracies similar to models trained without it.
arXiv Detail & Related papers (2020-10-13T01:53:15Z)
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
- Hold me tight! Influence of discriminative features on deep network boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.