Fairify: Fairness Verification of Neural Networks
- URL: http://arxiv.org/abs/2212.06140v2
- Date: Wed, 14 Dec 2022 02:18:09 GMT
- Title: Fairify: Fairness Verification of Neural Networks
- Authors: Sumon Biswas and Hridesh Rajan
- Abstract summary: We propose Fairify, an approach to verify the individual fairness property of neural network (NN) models.
Our approach adopts input partitioning and then prunes the NN for each partition to provide fairness certification or counterexample.
We evaluated Fairify on 25 real-world neural networks collected from four different sources.
- Score: 7.673007415383724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness of machine learning (ML) software has become a major concern in the recent past. Although recent research on testing and improving fairness has demonstrated impact on real-world software, providing fairness guarantees in practice is still lacking. Certification of ML models is challenging because of the complex decision-making process of the models. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals get similar treatment irrespective of their protected attributes, e.g., race, sex, or age. Verifying this fairness property is hard because of the global checking and non-linear computation nodes in NNs. We propose a sound approach to make individual fairness verification tractable for developers. The key idea is that many neurons in the NN always remain inactive when a smaller part of the input domain is considered. So, Fairify leverages whitebox access to the models in production and then applies formal-analysis-based pruning. Our approach adopts input partitioning and then prunes the NN for each partition to provide a fairness certification or a counterexample. We leverage interval arithmetic and neuron activation heuristics to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources, and demonstrated its effectiveness, scalability, and performance over the baseline and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which have practical implications.
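As a rough illustration of the pipeline described in the abstract (not the authors' implementation), the sketch below uses the z3-solver Python package on a hand-made 3-input ReLU toy network: interval arithmetic over one input partition marks neurons that are provably inactive there, the pruned network is encoded as SMT constraints for a pair of inputs that agree on every non-protected feature, and Z3 is asked for a counterexample where the decisions differ. All weights, bounds, and the protected-feature index are made-up assumptions.

```python
# Minimal sketch (assumed toy setup, not Fairify's code): check individual
# fairness on a single input partition for a tiny ReLU network.
from z3 import Solver, Real, If, And, Or, sat

# Toy model: 3 inputs -> 2 hidden ReLU units -> 1 output score (> 0 means "accept").
W1 = [[0.8, -0.5, 0.1],    # hidden unit 0
      [-1.2, -0.7, 0.3]]   # hidden unit 1
b1 = [0.2, -2.5]
W2 = [1.0, -0.9]
b2 = -0.1

# One input partition (a box): [lo_i, hi_i] per feature; feature 2 is the protected one.
lo, hi = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
PROTECTED = 2

# ---- Step 1: interval arithmetic over the partition --------------------------
def preact_bounds(w, b):
    # Lower/upper bound of the pre-activation over the box [lo, hi].
    lb = b + sum(wi * (lo[i] if wi >= 0 else hi[i]) for i, wi in enumerate(w))
    ub = b + sum(wi * (hi[i] if wi >= 0 else lo[i]) for i, wi in enumerate(w))
    return lb, ub

status = []  # "off" (prune the ReLU), "on" (ReLU is the identity), or "unknown"
for w, b in zip(W1, b1):
    lb, ub = preact_bounds(w, b)
    status.append("off" if ub <= 0 else "on" if lb >= 0 else "unknown")
print("neuron status on this partition:", status)

# ---- Step 2: SMT encoding of the (pruned) network for one input --------------
def encode(xs):
    hidden = []
    for (w, b), st in zip(zip(W1, b1), status):
        pre = b + sum(wi * xi for wi, xi in zip(w, xs))
        if st == "off":
            hidden.append(0)           # provably inactive on this partition: pruned
        elif st == "on":
            hidden.append(pre)         # provably active: ReLU drops out
        else:
            hidden.append(If(pre > 0, pre, 0))
    return b2 + sum(wi * h for wi, h in zip(W2, hidden))

x  = [Real(f"x_{i}") for i in range(3)]
xp = [Real(f"xp_{i}") for i in range(3)]

s = Solver()
for i in range(3):
    s.add(lo[i] <= x[i],  x[i]  <= hi[i])
    s.add(lo[i] <= xp[i], xp[i] <= hi[i])
    if i != PROTECTED:
        s.add(x[i] == xp[i])           # "similar" individuals: equal non-protected features
s.add(x[PROTECTED] != xp[PROTECTED])   # only the protected attribute differs

# ---- Step 3: ask Z3 for a counterexample -------------------------------------
y, yp = encode(x), encode(xp)
s.add(Or(And(y > 0, yp <= 0), And(y <= 0, yp > 0)))  # decisions disagree

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("individual fairness certified on this partition")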
Related papers
- NeuFair: Neural Network Fairness Repair with Dropout [19.49034966552718] (2024-07-05)
  This paper investigates neuron dropout as a post-processing bias mitigation for deep neural networks (DNNs).
  We show that our design of randomized algorithms is effective and efficient in improving fairness (up to 69%) with minimal or no model performance degradation.
- MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage [1.8238848494579714] (2024-01-23)
  We propose a novel model-agnostic debiasing framework named MAPPING for fair node classification.
  Our results show that MAPPING achieves better trade-offs among utility, fairness, and the privacy risks of sensitive information leakage.
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502] (2023-05-30)
  We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
  Our results indicate that interpolating with smoother functions leads to better generalization.
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943] (2023-03-06)
  We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
  We then analyze the sufficient conditions to guarantee fairness for the target dataset.
  Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
- Sound and Complete Verification of Polynomial Networks [55.9260539566555] (2022-09-15)
  Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition.
  Existing verification algorithms for ReLU neural networks (NNs) based on branch-and-bound (BaB) techniques cannot be trivially applied to PN verification.
  We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
- FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks [9.967054059014691] (2022-06-01)
  We study the problem of verifying, training, and guaranteeing individual fairness of neural network models.
  A popular approach for enforcing fairness is to translate a fairness notion into constraints over the parameters of the model.
  We develop a counterexample-guided post-processing technique to provably enforce fairness constraints at prediction time.
- CertiFair: A Framework for Certified Global Fairness of Neural Networks [1.4620086904601473] (2022-05-20)
  Individual fairness suggests that individuals who are similar with respect to a certain task are to be treated similarly by a Neural Network (NN) model.
  We construct a verifier which checks whether the fairness property holds for a given NN in a classification task.
  We then provide provable bounds on the fairness of the resulting NN.
- Probabilistic Verification of Neural Networks Against Group Fairness [21.158245095699456] (2021-07-18)
  We propose an approach to formally verify neural networks against fairness properties.
  Our method is built upon an approach for learning Markov Chains from a user-provided neural network.
  We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness.
- Fairness via Representation Neutralization [60.90373932844308] (2021-06-23)
  We propose a new mitigation technique, namely, Representation Neutralization for Fairness (RNF).
  RNF achieves fairness by debiasing only the task-specific classification head of DNN models.
  Experimental results over several benchmark datasets demonstrate that our RNF framework effectively reduces discrimination in DNN models.
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318] (2020-12-30)
  We show that overparametrized neural networks can meet the constraints.
  A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915] (2020-06-17)
  We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
  We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
This list is automatically generated from the titles and abstracts of the papers on this site.