CertiFair: A Framework for Certified Global Fairness of Neural Networks
- URL: http://arxiv.org/abs/2205.09927v1
- Date: Fri, 20 May 2022 02:08:47 GMT
- Title: CertiFair: A Framework for Certified Global Fairness of Neural Networks
- Authors: Haitham Khedr and Yasser Shoukry
- Abstract summary: Individual Fairness suggests that similar individuals with respect to a certain task are to be treated similarly by a Neural Network (NN) model.
We construct a verifier which checks whether the fairness property holds for a given NN in a classification task.
We then provide provable bounds on the fairness of the resulting NN.
- Score: 1.4620086904601473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of whether a Neural Network (NN) model satisfies
global individual fairness. Individual Fairness suggests that similar
individuals with respect to a certain task are to be treated similarly by the
decision model. In this work, we have two main objectives. The first is to
construct a verifier which checks whether the fairness property holds for a
given NN in a classification task or provide a counterexample if it is
violated, i.e., the model is fair if all similar individuals are classified the
same, and unfair if a pair of similar individuals are classified differently.
To that end, we construct a sound and complete verifier that verifies global
individual fairness properties of ReLU NN classifiers using distance-based
similarity metrics. The second objective of this paper is to provide a method
for training provably fair NN classifiers from unfair (biased) data. We propose
a fairness loss that can be used during training to enforce fair outcomes for
similar individuals. We then provide provable bounds on the fairness of the
resulting NN. We run experiments on commonly used fairness datasets that are
publicly available, and we show that global individual fairness can be improved
by 96% without a significant drop in test accuracy.
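For reference, the global individual fairness property being verified can be stated compactly. The following formalization is a sketch inferred from the abstract; the exact similarity metric d and threshold are assumptions:

```latex
% Global individual fairness (sketch): a classifier f over inputs X is fair
% w.r.t. a distance-based similarity metric d and threshold \epsilon if every
% pair of similar individuals receives the same label.
\forall x, x' \in \mathcal{X}: \quad d(x, x') \le \epsilon \;\Longrightarrow\; f(x) = f(x')
```

The abstract also mentions a fairness loss that enforces fair outcomes for similar individuals during training. Below is a minimal sketch of one plausible form of such a regularizer, penalizing output disagreement between an input and a similar counterpart; the helper names, the choice of MSE, and the pairing scheme are assumptions, not necessarily CertiFair's actual loss:

```python
import torch.nn.functional as F

def fairness_loss(model, x, x_similar):
    """Penalize output disagreement between similar individuals.

    x_similar is a batch of inputs deemed similar to x (e.g., identical
    except for the sensitive attribute). Hypothetical form; the paper's
    loss may differ.
    """
    return F.mse_loss(model(x), model(x_similar))

def training_step(model, optimizer, x, x_similar, y, lam=1.0):
    """One optimization step: task loss plus a weighted fairness term."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * fairness_loss(model, x, x_similar)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The weight lam trades off accuracy against fairness, in line with the reported 96% fairness improvement at little accuracy cost.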
Related papers
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint [28.70963753478329]
Algorithmic fairness in Graph Neural Networks (GNNs) has attracted significant attention.
We propose a novel method, GFairHint, which promotes individual fairness in GNNs.
GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models.
arXiv Detail & Related papers (2023-05-25T00:03:22Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairify: Fairness Verification of Neural Networks [7.673007415383724]
We propose Fairify, an approach to verify the individual fairness property in neural network (NN) models.
Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample.
We evaluated Fairify on 25 real-world neural networks collected from four different sources.
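The partition-then-prune loop described above can be sketched as follows; every name here is hypothetical and only illustrates the control flow, not Fairify's actual API:

```python
def verify_fairness(network, input_domain, partition, prune, solve):
    """Hypothetical partition-and-verify loop in the spirit of Fairify.

    partition: splits the input domain into regions.
    prune:     removes neurons that are provably inactive on a region.
    solve:     queries a verifier backend for a fairness certificate or
               a counterexample on the region (assumed interface).
    """
    verdicts = []
    for region in partition(input_domain):
        pruned = prune(network, region)   # smaller NN => cheaper verification query
        verdicts.append((region, solve(pruned, region)))
    return verdicts  # per-region certificates or counterexamples
```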
arXiv Detail & Related papers (2022-12-08T23:31:06Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks [9.967054059014691]
We study the problem of verifying, training, and guaranteeing individual fairness of neural network models.
A popular approach for enforcing fairness is to translate a fairness notion into constraints over the parameters of the model.
We develop a counterexample-guided post-processing technique to provably enforce fairness constraints at prediction time.
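A rough sketch of what counterexample-guided enforcement at prediction time could look like; the names and the repair rule are hypothetical, not FETA's actual algorithm:

```python
def fair_predict(model, x, similar_to, repair):
    """Hypothetical prediction-time fairness repair.

    similar_to: yields individuals similar to x under the fairness notion.
    repair:     adjusts the prediction when a counterexample pair is found.
    """
    y = model(x).argmax()
    for x_sim in similar_to(x):
        if model(x_sim).argmax() != y:   # fairness counterexample
            return repair(model, x, x_sim)
    return y
```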
arXiv Detail & Related papers (2022-06-01T15:06:11Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers biases induced by the sensitive attributes of both a node and its neighbors.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free notion of individual fairness and a cooperative contextual bandits algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)