Certification of Distributional Individual Fairness
- URL: http://arxiv.org/abs/2311.11911v1
- Date: Mon, 20 Nov 2023 16:41:54 GMT
- Title: Certification of Distributional Individual Fairness
- Authors: Matthew Wicker, Vihari Piratla, and Adrian Weller
- Abstract summary: We provide certificates for individual fairness (IF) of neural networks.
We show that our method allows us to certify neural networks that are several orders of magnitude larger than those considered by prior works.
- Score: 41.65399122566472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing formal guarantees of algorithmic fairness is of paramount
importance to socially responsible deployment of machine learning algorithms.
In this work, we study formal guarantees, i.e., certificates, for individual
fairness (IF) of neural networks. We start by introducing a novel convex
approximation of IF constraints that exponentially decreases the computational
cost of providing formal guarantees of local individual fairness. We highlight
that prior methods are constrained by their focus on global IF certification
and can therefore only scale to models with a few dozen hidden neurons, thus
limiting their practical impact. We propose to certify distributional
individual fairness which ensures that for a given empirical distribution and
all distributions within a $\gamma$-Wasserstein ball, the neural network has
guaranteed individually fair predictions. Leveraging developments in
quasi-convex optimization, we provide novel and efficient certified bounds on
distributional individual fairness and show that our method allows us to
certify and regularize neural networks that are several orders of magnitude
larger than those considered by prior works. Moreover, we study real-world
distribution shifts and find our bounds to be a scalable, practical, and sound
source of IF guarantees.
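As a rough intuition for the local individual fairness property being certified, the sketch below (toy weights and thresholds chosen for illustration; this is an empirical sampling check, not the paper's convex certificate) tests that a small linear model's outputs stay within a tolerance delta for inputs sampled within an eps-ball of a given point:

```python
import numpy as np

# Toy linear "network" (illustrative weights, not from the paper).
W = np.array([[0.5, -0.2, 0.1],
              [0.3, 0.4, -0.1]])

def f(x):
    return W @ x

def empirically_fair(x, eps=0.1, delta=0.2, n_samples=2000, seed=0):
    """Sampling-based check of local individual fairness at x:
    for sampled x' with ||x' - x||_2 <= eps, require ||f(x') - f(x)||_2 <= delta.
    This is an empirical test only; the paper's certificates bound the
    worst case over the entire eps-ball."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        xp = x + eps * u / np.linalg.norm(u)  # point on the eps-sphere
        if np.linalg.norm(f(xp) - f(x)) > delta:
            return False
    return True

print(empirically_fair(np.array([1.0, 0.0, -1.0])))  # prints True
```

For this linear map the check can in fact be made exact: `np.linalg.norm(W, 2) * eps <= delta` bounds the worst-case output deviation over the whole ball, which is the flavor of sound guarantee the paper's certificates provide at scale for nonlinear networks.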
Related papers
- FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks [6.22084835644296]
We propose a method for formally certifying and quantifying the individual fairness of deep neural networks (DNNs).
Individual fairness guarantees that any two individuals who are identical except for a legally protected attribute (e.g., gender or race) receive the same treatment.
We have implemented our method and evaluated it on four popular fairness research datasets.
arXiv Detail & Related papers (2024-09-05T03:36:05Z)
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
- f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization [9.591164070876689]
This paper presents a unified optimization framework for fair empirical risk minimization based on f-divergence measures (f-FERM).
Our extension is based on a distributionally robust optimization reformulation of the f-FERM objective under $L_p$ norms as uncertainty sets.
In addition, our experiments demonstrate the superiority of the fairness-accuracy tradeoffs offered by f-FERM for almost all batch sizes.
arXiv Detail & Related papers (2023-12-06T03:14:16Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Certifying Some Distributional Fairness with Subpopulation Decomposition [20.009388617013986]
We first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem.
We then propose a general fairness certification framework and instantiate it for both sensitive shifting and general shifting scenarios.
Our framework is flexible enough to integrate additional non-skewness constraints, and we show that it provides even tighter certification under different real-world scenarios.
arXiv Detail & Related papers (2022-05-31T01:17:50Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
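The ATC recipe can be sketched in a few lines (hypothetical helper names; the paper also considers scores such as negative entropy in place of raw max confidence):

```python
import numpy as np

def atc_predict_accuracy(source_conf, source_correct, target_conf):
    """Average Thresholded Confidence (ATC), sketched: choose a threshold t
    so that the fraction of source examples with confidence above t matches
    the observed source accuracy, then predict target accuracy as the
    fraction of target confidences above t."""
    source_acc = np.mean(source_correct)
    # Threshold at the (1 - accuracy) quantile of source confidences.
    t = np.quantile(source_conf, 1.0 - source_acc)
    return float(np.mean(target_conf > t))

src_conf = np.array([0.95, 0.9, 0.8, 0.3])
src_correct = np.array([1, 1, 1, 0])       # source accuracy = 0.75
tgt_conf = np.array([0.9, 0.2, 0.85, 0.1])
print(atc_predict_accuracy(src_conf, src_correct, tgt_conf))  # prints 0.5
```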
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Data Dependent Randomized Smoothing [127.34833801660233]
We show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches.
We get 9% and 6% improvements over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet, respectively.
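For context on "certified accuracy at a radius": in randomized smoothing it counts the points whose certified radius exceeds that radius. A minimal sketch of the standard certified L2 radius bound (Cohen et al., 2019; stdlib only, the variable names are ours):

```python
from statistics import NormalDist

def certified_radius(p_a, p_b, sigma):
    """Certified L2 radius for a smoothed classifier:
    R = (sigma / 2) * (Phi^{-1}(p_a) - Phi^{-1}(p_b)),
    where p_a lower-bounds the top class probability under Gaussian noise
    with std sigma, and p_b upper-bounds the runner-up probability."""
    nd = NormalDist()
    return 0.5 * sigma * (nd.inv_cdf(p_a) - nd.inv_cdf(p_b))

print(round(certified_radius(0.9, 0.1, 0.5), 4))  # prints 0.6408
```

A prediction certified with radius R >= 0.5 would count toward the certified accuracy at radius 0.5 reported above.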
arXiv Detail & Related papers (2020-12-08T10:53:11Z)
- Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.