Contributions to Large Scale Bayesian Inference and Adversarial Machine
Learning
- URL: http://arxiv.org/abs/2109.13232v1
- Date: Sat, 25 Sep 2021 23:02:47 GMT
- Title: Contributions to Large Scale Bayesian Inference and Adversarial Machine
Learning
- Authors: Víctor Gallego
- Abstract summary: The rampant adoption of ML methodologies has revealed that models are usually adopted to make decisions without taking into account the uncertainties in their predictions.
We believe that developing ML systems that take into account predictive uncertainties and are robust against adversarial examples is a must for real-world tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rampant adoption of ML methodologies has revealed that models are usually
adopted to make decisions without taking into account the uncertainties in
their predictions. More critically, they can be vulnerable to adversarial
examples. Thus, we believe that developing ML systems that take into account
predictive uncertainties and are robust against adversarial examples is a must
for critical, real-world tasks. We start with a case study in retailing. We
propose a robust implementation of the Nerlove-Arrow model using a Bayesian
structural time series model. Its Bayesian nature facilitates incorporating
prior information reflecting the manager's views, which can be updated with
relevant data. However, this case adopted classical Bayesian techniques, such
as the Gibbs sampler. Nowadays, the ML landscape is pervaded with neural
networks and this chapter also surveys current developments in this sub-field.
Then, we tackle the problem of scaling Bayesian inference to complex models and
large data regimes. In the first part, we propose a unifying view of two
different Bayesian inference algorithms, Stochastic Gradient Markov Chain Monte
Carlo (SG-MCMC) and Stein Variational Gradient Descent (SVGD), leading to
improved and efficient novel sampling schemes. In the second part, we develop a
framework to boost the efficiency of Bayesian inference in probabilistic models
by embedding a Markov chain sampler within a variational posterior
approximation. After that, we present an alternative perspective on adversarial
classification based on adversarial risk analysis, leveraging the scalable
Bayesian approaches from Chapter 2. In Chapter 4, we turn to reinforcement
learning, introducing Threatened Markov Decision Processes and showing the
benefits of accounting for adversaries in RL while the agent learns.
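The abstract's unifying view of SG-MCMC and SVGD can be made concrete with a minimal, self-contained sketch of one of the two samplers. The following is a toy SVGD implementation for a 1-D standard Gaussian target; the names (`grad_log_p`, `svgd_step`), the fixed kernel bandwidth `h`, and the step size are illustrative assumptions, not the thesis's actual scheme.

```python
import numpy as np

def grad_log_p(x):
    # Score of a standard 1-D Gaussian target: d/dx log N(x; 0, 1) = -x
    return -x

def rbf_kernel(x, h=1.0):
    # Pairwise RBF kernel matrix K[j, i] = k(x_j, x_i) and its gradient in x_j
    diff = x[:, None] - x[None, :]        # (n, n), entry [j, i] = x_j - x_i
    K = np.exp(-diff**2 / (2 * h**2))
    grad_K = -diff / h**2 * K             # d k(x_j, x_i) / d x_j
    return K, grad_K

def svgd_step(x, eps=0.1):
    # One SVGD update: phi(x_i) = (1/n) * sum_j [k(x_j, x_i) * score(x_j)
    #                                            + grad_{x_j} k(x_j, x_i)]
    n = len(x)
    K, grad_K = rbf_kernel(x)
    phi = (K * grad_log_p(x)[:, None]).sum(axis=0) / n + grad_K.sum(axis=0) / n
    return x + eps * phi

rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, scale=0.5, size=50)  # initialize far from target
for _ in range(500):
    particles = svgd_step(particles)
# particles now approximate samples from N(0, 1): the kernel-weighted score term
# drives them toward high density, the kernel-gradient term keeps them spread out.
```

The second (repulsive) term in `phi` is what distinguishes SVGD from plain gradient ascent on the log density: without it all particles would collapse to the mode rather than approximate the posterior.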
Related papers
- Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians [60.22542847840578]
Despite advances in adversarial machine learning, inference for Gaussian models in the presence of an adversary is notably understudied.
We consider a self-interested attacker who wishes to disrupt a decision-maker's conditional inference and subsequent actions by corrupting a set of evidentiary variables.
To avoid detection, the attacker also desires the attack to appear plausible wherein plausibility is determined by the density of the corrupted evidence.
arXiv Detail & Related papers (2024-11-21T17:46:55Z) - Variational Bayesian Bow tie Neural Networks with Shrinkage [0.276240219662896]
We build a relaxed version of the standard feed-forward rectified neural network.
We employ Polya-Gamma data augmentation tricks to render a conditionally linear and Gaussian model.
We derive a variational inference algorithm that avoids distributional assumptions and independence across layers.
arXiv Detail & Related papers (2024-11-17T17:36:30Z) - A Bayesian Approach to Data Point Selection [24.98069363998565]
Data point selection (DPS) is becoming a critical topic in deep learning.
Existing approaches to DPS are predominantly based on a bi-level optimisation (BLO) formulation.
We propose a novel Bayesian approach to DPS.
arXiv Detail & Related papers (2024-11-06T09:04:13Z) - Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian.
arXiv Detail & Related papers (2022-12-29T11:48:01Z) - BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen
Neural Networks [50.15201777970128]
We propose BayesCap that learns a Bayesian identity mapping for the frozen model, allowing uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv Detail & Related papers (2022-07-14T12:50:09Z) - Rethinking Bayesian Learning for Data Analysis: The Art of Prior and
Inference in Sparsity-Aware Modeling [20.296566563098057]
Sparse modeling for signal processing and machine learning has been a focus of scientific research for over two decades.
This article reviews some recent advances in incorporating sparsity-promoting priors into three popular data modeling tools.
arXiv Detail & Related papers (2022-05-28T00:43:52Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty
Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - $\beta$-Cores: Robust Large-Scale Bayesian Data Summarization in the
Presence of Outliers [14.918826474979587]
The quality of classic Bayesian inference depends critically on whether observations conform with the assumed data generating model.
We propose a variational inference method that, in a principled way, can simultaneously scale to large datasets and remain robust in the presence of outliers.
We illustrate the applicability of our approach in diverse simulated and real datasets, and various statistical models.
arXiv Detail & Related papers (2020-08-31T13:47:12Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z) - Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.