Probabilistic Verification of Neural Networks Against Group Fairness
- URL: http://arxiv.org/abs/2107.08362v1
- Date: Sun, 18 Jul 2021 04:34:31 GMT
- Title: Probabilistic Verification of Neural Networks Against Group Fairness
- Authors: Bing Sun, Jun Sun, Ting Dai, Lijun Zhang
- Abstract summary: We propose an approach to formally verify neural networks against fairness.
Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
- Score: 21.158245095699456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness is crucial for neural networks that are used in applications with
important societal implications. Recently, there have been multiple attempts at
improving the fairness of neural networks, with a focus on fairness testing (e.g.,
generating individual discriminatory instances) and fairness training (e.g.,
enhancing fairness through augmented training). In this work, we propose an
approach to formally verify neural networks against fairness, with a focus on
independence-based fairness such as group fairness. Our method is built upon an
approach for learning Markov Chains from a user-provided neural network (i.e.,
a feed-forward neural network or a recurrent neural network) which is
guaranteed to facilitate sound analysis. The learned Markov Chain not only
allows us to verify (with a Probably Approximately Correct guarantee) whether
the neural network is fair or not, but also facilitates sensitivity analysis,
which helps to understand why fairness is violated. We demonstrate that with
our analysis results, the neural weights can be optimized to improve fairness.
Our approach has been evaluated with multiple models trained on benchmark
datasets, and the experimental results show that our approach is effective and
efficient.
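
To make the verified property concrete: independence-based group fairness asks that the network's prediction be (nearly) independent of a sensitive attribute. Below is a minimal sampling-based sketch of checking this with a Hoeffding-style PAC bound; it is not the paper's Markov Chain construction, and `model_predict`, `sample_inputs`, and `sensitive_idx` are hypothetical names.

```python
import numpy as np

def pac_sample_size(epsilon: float, delta: float) -> int:
    """Hoeffding bound: this many samples make an empirical rate lie within
    epsilon of the true rate with probability at least 1 - delta."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * epsilon ** 2)))

def group_fairness_gap(model_predict, sample_inputs, sensitive_idx, n_samples):
    """Estimate the independence-based fairness gap
    |P(Y=1 | A=0) - P(Y=1 | A=1)| by direct Monte Carlo sampling."""
    X = sample_inputs(n_samples)   # draw inputs from the data distribution
    y = model_predict(X)           # 0/1 predictions of the network
    a = X[:, sensitive_idx]        # binary sensitive attribute
    p0 = y[a == 0].mean()          # acceptance rate for group A=0
    p1 = y[a == 1].mean()          # acceptance rate for group A=1
    return abs(p0 - p1)

# e.g., ~26,492 samples per group bound each rate within 0.01 at 99% confidence
n = pac_sample_size(epsilon=0.01, delta=0.01)
```

A verifier in this style would declare the network fair if the estimated gap, plus the epsilon error margin, stays below a user-chosen threshold.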
Related papers
- Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks [36.182644157139144]
We develop a framework to train fair neural networks in an efficient and inexpensive way.
Last-layer fine-tuning alone can effectively promote fairness in deep neural networks.
arXiv Detail & Related papers (2023-04-08T06:49:15Z)
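
As a rough illustration of the last-layer idea above (a sketch under assumptions, not the authors' exact training objective): freeze the backbone, unfreeze the classifier head, and add a soft demographic-parity penalty to the task loss. The `head` argument, the penalty form, and `lam` are illustrative.

```python
import torch

def lastlayer_fairness_finetune(model, head, loader, lam=1.0, epochs=5):
    """Freeze the backbone and fine-tune only the last layer with a soft
    demographic-parity penalty added to the binary classification loss."""
    for p in model.parameters():
        p.requires_grad = False
    for p in head.parameters():    # head: the model's final linear layer
        p.requires_grad = True
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y, a in loader:     # a: binary sensitive attribute per sample
            opt.zero_grad()
            logits = model(x).squeeze(-1)
            task = bce(logits, y.float())
            p = torch.sigmoid(logits)
            # Mean-prediction gap between groups (assumes both appear per batch).
            gap = (p[a == 1].mean() - p[a == 0].mean()).abs()
            (task + lam * gap).backward()
            opt.step()
    return model
```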
- Fairify: Fairness Verification of Neural Networks [7.673007415383724]
We propose Fairify, an approach to verify the individual fairness property of neural network (NN) models.
Our approach adopts input partitioning and then prunes the NN for each partition to provide fairness certification or counterexample.
We evaluated Fairify on 25 real-world neural networks collected from four different sources.
arXiv Detail & Related papers (2022-12-08T23:31:06Z)
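
For contrast with Fairify's sound certificates, here is a minimal testing-style sketch of the individual fairness property it targets: search for two inputs that differ only in the sensitive attribute but receive different predictions. All names below (`model_predict`, `lower`, `upper`, `sensitive_idx`) are assumptions; Fairify itself partitions the input space and prunes the NN per partition to certify or find counterexamples, rather than sampling randomly.

```python
import numpy as np

def search_fairness_counterexample(model_predict, lower, upper, sensitive_idx,
                                   n_trials=10_000, seed=0):
    """Randomized search for an individual-fairness violation: two inputs
    identical except for the sensitive attribute yet classified differently.
    Finding a pair refutes fairness; finding none is only empirical evidence,
    not a certificate."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        x = rng.uniform(lower, upper)   # sample within the input bounds
        x0, x1 = x.copy(), x.copy()
        x0[sensitive_idx], x1[sensitive_idx] = 0.0, 1.0
        if model_predict(x0[None, :]) != model_predict(x1[None, :]):
            return x0, x1               # counterexample pair
    return None
```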
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks [9.967054059014691]
We study the problem of verifying, training, and guaranteeing individual fairness of neural network models.
A popular approach for enforcing fairness is to translate a fairness notion into constraints over the parameters of the model.
We develop a counterexample-guided post-processing technique to provably enforce fairness constraints at prediction time.
arXiv Detail & Related papers (2022-06-01T15:06:11Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
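
A minimal sketch of the entropy-raising idea described above, assuming the overconfident region is approximated by a simple confidence threshold (the paper's criterion for *unjustifiably* overconfident feature-space regions is more involved); `logits`, `prior`, and `conf_threshold` are illustrative names.

```python
import torch
import torch.nn.functional as F

def entropy_raising_penalty(logits, prior, conf_threshold=0.95):
    """On predictions more confident than conf_threshold, penalize the KL
    divergence from the label prior, pulling their entropy up toward it."""
    probs = F.softmax(logits, dim=-1)
    flagged = probs.max(dim=-1).values > conf_threshold
    if not flagged.any():
        return logits.new_zeros(())     # nothing flagged: zero penalty
    logp = F.log_softmax(logits[flagged], dim=-1)
    # KL(prior || p) = sum_c prior_c * (log prior_c - log p_c)
    return (prior * (prior.log() - logp)).sum(dim=-1).mean()
```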
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- FaiR-N: Fair and Robust Neural Networks for Structured Data [10.14835182649819]
We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary.
We show that training with this loss yields more fair and robust neural networks with similar accuracies to models trained without it.
arXiv Detail & Related papers (2020-10-13T01:53:15Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long-term bonds as test assets, we investigate neural networks.
We find that our networks, when fed with substantially less data, perform significantly worse.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
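
A minimal sketch of Monte-Carlo VaR estimation as described in the entry above, assuming i.i.d. Gaussian daily returns purely for illustration; `mu`, `sigma`, and `horizon_days` are hypothetical parameters, not the paper's asset model.

```python
import numpy as np

def monte_carlo_var(mu, sigma, horizon_days, alpha=0.99,
                    n_sims=100_000, seed=0):
    """Monte Carlo Value at Risk: simulate portfolio returns over the horizon
    and report the loss at the (1 - alpha) quantile."""
    rng = np.random.default_rng(seed)
    daily = rng.normal(mu, sigma, size=(n_sims, horizon_days))
    total = daily.sum(axis=1)              # aggregate return per simulated path
    return -np.quantile(total, 1 - alpha)  # VaR reported as a positive loss

# e.g., 99% 10-day VaR for a zero-mean asset with 1% daily volatility
var_99 = monte_carlo_var(mu=0.0, sigma=0.01, horizon_days=10)
```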
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.