Implicit Visual Bias Mitigation by Posterior Estimate Sharpening of a
Bayesian Neural Network
- URL: http://arxiv.org/abs/2303.16564v3
- Date: Tue, 27 Feb 2024 15:08:57 GMT
- Authors: Rebecca S Stone, Nishant Ravikumar, Andrew J Bulpitt, David C Hogg
- Abstract summary: We propose a novel implicit mitigation method using a Bayesian neural network.
Our proposed posterior estimate sharpening procedure encourages the network to focus on core features that do not contribute to high uncertainties.
- Score: 7.488317734152586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fairness of a deep neural network is strongly affected by dataset bias
and spurious correlations, both of which are usually present in modern
feature-rich and complex visual datasets. Due to the difficulty and variability
of the task, no single de-biasing method has been universally successful. In
particular, implicit methods not requiring explicit knowledge of bias variables
are especially relevant for real-world applications. We propose a novel
implicit mitigation method using a Bayesian neural network, allowing us to
leverage the relationship between epistemic uncertainties and the presence of
bias or spurious correlations in a sample. Our proposed posterior estimate
sharpening procedure encourages the network to focus on core features that do
not contribute to high uncertainties. Experimental results on three benchmark
datasets demonstrate that Bayesian networks with sharpened posterior estimates
perform comparably to existing methods and show potential worthy of
further exploration.
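The core idea — estimating per-sample epistemic uncertainty from multiple stochastic forward passes through a Bayesian network, then down-weighting the uncertain samples so training focuses on core features — can be illustrated with a minimal NumPy sketch. The mutual-information estimator is the standard BALD decomposition; the exponential weighting scheme and function names here are our own assumptions, not the authors' exact sharpening procedure.

```python
import numpy as np

def epistemic_uncertainty(probs):
    """probs: (T, N, C) softmax outputs from T stochastic forward passes
    (e.g. Monte Carlo samples of the posterior) over N samples, C classes.
    Returns per-sample epistemic uncertainty as mutual information:
    predictive entropy minus expected entropy."""
    mean_p = probs.mean(axis=0)                                    # (N, C)
    pred_ent = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)      # H[E[p]]
    exp_ent = -(probs * np.log(probs + 1e-12)).sum(axis=2).mean(axis=0)  # E[H[p]]
    return pred_ent - exp_ent                                      # (N,)

def sharpening_weights(uncertainty, temperature=1.0):
    """Down-weight high-uncertainty samples (hypothetical weighting scheme),
    normalized so the average weight is 1."""
    w = np.exp(-uncertainty / temperature)
    return w / w.mean()
```

A sample on which the posterior samples disagree receives high mutual information and hence a low training weight, while confidently and consistently classified samples keep full weight.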
Related papers
- Uncertainty Propagation through Trained Deep Neural Networks Using
Factor Graphs [4.704825771757308]
Uncertainty propagation seeks to estimate aleatoric uncertainty by propagating input uncertainties to network predictions.
Motivated by the complex information flows within deep neural networks, we developed a novel approach by posing uncertainty propagation as a non-linear optimization problem using factor graphs.
arXiv Detail & Related papers (2023-12-10T17:26:27Z)
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
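The RAFs idea — injecting diversity into an ensemble by giving each member a randomly drawn activation function, then reading disagreement as uncertainty — can be sketched with small random-feature regressors standing in for the paper's neural networks (an ELM-style simplification; function names and the least-squares readout are our own assumptions):

```python
import numpy as np

# Candidate activation functions; each ensemble member draws one at random.
ACTIVATIONS = [np.tanh,
               lambda z: np.maximum(z, 0.0),          # ReLU
               lambda z: 1.0 / (1.0 + np.exp(-z))]    # sigmoid

def fit_member(X, y, rng, hidden=64):
    """One member: random hidden layer with a randomly chosen activation,
    linear readout fitted by least squares."""
    act = ACTIVATIONS[rng.integers(len(ACTIVATIONS))]
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = act(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta, act

def predict_member(member, X):
    W, b, beta, act = member
    return act(X @ W + b) @ beta

def ensemble_predict(members, X):
    """Mean prediction plus member disagreement as an uncertainty estimate."""
    preds = np.stack([predict_member(m, X) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)
```

Because members differ in both random weights and activation shape, their predictions diverge away from the training data, which is exactly the disagreement signal an ensemble uncertainty estimate exploits.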
arXiv Detail & Related papers (2023-02-28T13:17:56Z)
- Epistemic Uncertainty-Weighted Loss for Visual Bias Mitigation [6.85474615630103]
We argue for the relevance of exploring methods that are entirely agnostic to the presence of any bias.
We propose using Bayesian neural networks with a predictive uncertainty-weighted loss function to identify potential bias.
We show the method has potential to mitigate visual bias on a bias benchmark dataset and on a real-world face detection problem.
arXiv Detail & Related papers (2022-04-20T11:01:51Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
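A common way to operationalize the latent-representation distribution for OOD detection is to fit a Gaussian to in-distribution features and score new inputs by Mahalanobis distance. The sketch below is one standard instantiation for illustration, not necessarily the estimator used in that paper:

```python
import numpy as np

def fit_latent_gaussian(feats):
    """Fit a Gaussian to in-distribution latent features of shape (N, D);
    the small diagonal term keeps the covariance invertible."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def ood_score(feats, mu, precision):
    """Squared Mahalanobis distance to the fitted density;
    larger scores indicate inputs further from the training distribution."""
    d = feats - mu
    return np.einsum('ij,jk,ik->i', d, precision, d)
```

Inputs drawn from the training density receive low scores, while shifted inputs score high, which is the signal both OOD detection and epistemic-uncertainty proxies rely on.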
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models, on certain occasions, marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
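The failure-based scheme above pairs an intentionally biased network, trained with generalized cross-entropy so it latches onto "easy" spurious shortcuts, with a debiased network that upweights the samples the biased one fails on. A minimal NumPy sketch of the two losses and the relative-difficulty weights (function names are ours; a full training loop is omitted):

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy amplifies easy samples, so a network trained
    on it preferentially learns the spurious (bias-aligned) shortcut."""
    p_y = probs[np.arange(len(labels)), labels]
    return (1.0 - p_y ** q) / q

def ce_loss(probs, labels):
    """Standard cross-entropy per sample."""
    p_y = probs[np.arange(len(labels)), labels]
    return -np.log(p_y + 1e-12)

def relative_difficulty_weights(probs_biased, probs_debiased, labels):
    """Weights in [0, 1]: near 1 when the biased network fails on a sample
    (likely bias-conflicting), so the debiased network focuses on it."""
    lb = ce_loss(probs_biased, labels)
    ld = ce_loss(probs_debiased, labels)
    return lb / (lb + ld + 1e-12)
```

Samples the biased network classifies confidently contribute little to the debiased network's loss, while its failures — the bias-conflicting samples — dominate.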
This list is automatically generated from the titles and abstracts of the papers in this site.