Individual Fairness in Bayesian Neural Networks
- URL: http://arxiv.org/abs/2304.10828v1
- Date: Fri, 21 Apr 2023 09:12:14 GMT
- Title: Individual Fairness in Bayesian Neural Networks
- Authors: Alice Doherty, Matthew Wicker, Luca Laurenti, Andrea Patane
- Abstract summary: We study Individual Fairness (IF) for Bayesian neural networks (BNNs)
We use bounds on statistical sampling over the input space and the relationship between adversarial robustness and individual fairness to derive a framework for the systematic estimation of $\epsilon$-$\delta$-IF.
We find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts.
- Score: 9.386341375741225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study Individual Fairness (IF) for Bayesian neural networks (BNNs).
Specifically, we consider the $\epsilon$-$\delta$-individual fairness notion,
which requires that, for any pair of input points that are $\epsilon$-similar
according to a given similarity metric, the output of the BNN is within a
given tolerance $\delta>0.$ We leverage bounds on statistical sampling over the
input space and the relationship between adversarial robustness and individual
fairness to derive a framework for the systematic estimation of
$\epsilon$-$\delta$-IF, designing Fair-FGSM and Fair-PGD as
global, fairness-aware extensions to gradient-based attacks for BNNs. We
empirically study IF of a variety of approximately inferred BNNs with different
architectures on fairness benchmarks, and compare against deterministic models
learnt using frequentist techniques. Interestingly, we find that BNNs trained
by means of approximate Bayesian inference consistently tend to be markedly
more individually fair than their deterministic counterparts.
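To make the notion concrete, below is a minimal, illustrative sketch (not the authors' implementation) of estimating the $\epsilon$-$\delta$-IF gap of a BNN whose posterior is approximated by Monte Carlo weight samples, using a one-step, Fair-FGSM-style perturbation; the toy model, the `predict_mc` helper, and the per-feature similarity weights are all assumptions made for illustration.

```python
# Minimal, illustrative sketch (not the paper's implementation) of a
# Fair-FGSM-style lower bound on the epsilon-delta-IF gap of an
# approximate BNN represented by Monte Carlo weight samples.
import torch
import torch.nn as nn

def predict_mc(models, x):
    """Posterior-predictive mean over sampled networks (approximate BNN)."""
    return torch.stack([m(x) for m in models], dim=0).mean(dim=0)

def fair_fgsm_gap(models, x, eps, sim_weights):
    """One-step, fairness-aware FGSM: perturb x inside an eps-ball of a
    feature-weighted L-infinity similarity metric and report the largest
    observed change in the predictive mean (a lower bound on the IF gap)."""
    x_adv = x.clone().requires_grad_(True)
    base = predict_mc(models, x).detach()
    gap = (predict_mc(models, x_adv) - base).abs().sum()
    gap.backward()
    # Each feature moves by at most eps * sim_weights[i], encoding the metric.
    x_pert = x + eps * sim_weights * x_adv.grad.sign()
    return (predict_mc(models, x_pert) - base).abs().max().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-in for posterior samples: independently initialised networks.
    models = [nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
              for _ in range(10)]
    x = torch.randn(1, 4)
    sim_weights = torch.tensor([1.0, 1.0, 2.0, 1.0])  # illustrative metric weights
    gap = fair_fgsm_gap(models, x, eps=0.1, sim_weights=sim_weights)
    delta = 0.05
    verdict = "not falsified" if gap <= delta else "violated"
    print(f"estimated IF gap = {gap:.4f}; epsilon-delta-IF {verdict} at delta = {delta}")
```

A multi-step Fair-PGD analogue would simply iterate the projected step; in the paper's framework, such per-point attack estimates are combined with statistical sampling bounds over the input space to obtain the overall $\epsilon$-$\delta$-IF estimate.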
Related papers
- Adversarial Robustness Certification for Bayesian Neural Networks [22.71265211510824]
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
Our framework is based on weight sampling, integration, and bound propagation techniques, and can be applied to BNNs with a large number of parameters.
arXiv Detail & Related papers (2023-06-23T16:58:25Z) - Constraining cosmological parameters from N-body simulations with
Variational Bayesian Neural Networks [0.0]
Multiplicative normalizing flows (MNFs) are a family of approximate posteriors for the parameters of BNNs.
We compare MNFs against standard BNNs and the flipout estimator.
MNFs provide a more realistic predictive distribution, closer to the true posterior, mitigating the bias introduced by the variational approximation.
arXiv Detail & Related papers (2023-01-09T16:07:48Z) - Explicit Tradeoffs between Adversarial and Natural Distributional
Robustness [48.44639585732391]
In practice, models need to enjoy both types of robustness to ensure reliability.
In this work, we show that in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z) - Individual Fairness Guarantees for Neural Networks [0.0]
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).
We work with the $\epsilon$-$\delta$-IF formulation, which requires that the output difference between any pair of $\epsilon$-similar individuals is bounded by a maximum decision tolerance.
We show how this formulation can be used to encourage model fairness at training time by modifying the NN loss, and empirically confirm that this approach yields NNs that are orders of magnitude fairer than state-of-the-art methods (see the loss-modification sketch after this list).
arXiv Detail & Related papers (2022-05-11T20:21:07Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Model Architecture Adaption for Bayesian Neural Networks [9.978961706999833]
We present a novel neural architecture search (NAS) approach that optimizes BNNs for both accuracy and uncertainty.
In our experiments, the searched models show uncertainty quantification and accuracy comparable to the state of the art (deep ensembles).
arXiv Detail & Related papers (2022-02-09T10:58:50Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Robustness Certificates for Implicit Neural Networks: A Mixed Monotone
Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Fairness Through Robustness: Investigating Robustness Disparity in Deep
Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
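Below is the loss-modification sketch referenced in the Individual Fairness Guarantees entry above: a minimal, hypothetical illustration (not the cited paper's exact method) of adding an individual-fairness penalty to a standard training loss. The penalty is an FGSM-style surrogate for the worst-case output change within an $\epsilon$-ball; the model, data, penalty weight, and the plain $\epsilon$-ball used in place of a task-specific similarity metric are all placeholders.

```python
# Minimal sketch (illustrative, not the cited paper's method): train a network
# with an added individual-fairness penalty that discourages output changes
# under one-step perturbations inside the epsilon ball.
import torch
import torch.nn as nn

def if_penalty(model, x, eps):
    """FGSM-style surrogate for max over eps-similar x' of |f(x') - f(x)|."""
    x_pert = x.detach().clone().requires_grad_(True)
    diff = (model(x_pert) - model(x).detach()).abs().sum()
    # Gradient w.r.t. the input only; parameter gradients are left untouched.
    grad = torch.autograd.grad(diff, x_pert)[0]
    x_adv = x + eps * grad.sign()
    return (model(x_adv) - model(x)).abs().mean()

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 4), torch.randn(64, 1)   # toy regression batch
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) + 0.1 * if_penalty(model, x, eps=0.1)
    loss.backward()
    opt.step()
```

The 0.1 penalty weight is an arbitrary placeholder, and a task-specific similarity metric (e.g., a feature-weighted ball) would replace the plain $\epsilon$-ball in a faithful implementation.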