Bayesian Inference with Nonlinear Generative Models: Comments on Secure
Learning
- URL: http://arxiv.org/abs/2201.09986v1
- Date: Wed, 19 Jan 2022 08:29:53 GMT
- Title: Bayesian Inference with Nonlinear Generative Models: Comments on Secure
Learning
- Authors: Ali Bereyhi and Bruno Loureiro and Florent Krzakala and Ralf R.
Müller and Hermann Schulz-Baldes
- Abstract summary: This work aims to bring attention to nonlinear generative models and their secrecy potential.
We invoke the replica method to derive the asymptotic normalized cross entropy in an inverse probability problem.
We propose a new secure coding scheme which achieves the secrecy capacity of the wiretap channel.
- Score: 29.818395770651865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unlike the classical linear model, nonlinear generative models have been
addressed sparsely in the literature. This work aims to bring attention to
these models and their secrecy potential. To this end, we invoke the replica
method to derive the asymptotic normalized cross entropy in an inverse
probability problem whose generative model is described by a Gaussian random
field with a generic covariance function. Our derivations further demonstrate
the asymptotic statistical decoupling of Bayesian inference algorithms and
specify the decoupled setting for a given nonlinear model.
The replica solution shows that strictly nonlinear models exhibit an
all-or-nothing phase transition: there exists a critical load at which
optimal Bayesian inference changes abruptly from perfect recovery to
uncorrelated learning. This finding leads to the design of a new secure coding scheme which
achieves the secrecy capacity of the wiretap channel. The proposed coding has a
significantly smaller codebook size compared to the random coding scheme of
Wyner. This result implies that strictly nonlinear generative
models are perfectly secure without any secure coding. We justify this latter
statement through the analysis of an illustrative model for perfectly secure
and reliable inference.
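The secrecy intuition behind a strictly nonlinear generative model can be illustrated with a minimal numerical sketch. The model form below (a quadratic, i.e. even, link applied to a Gaussian projection) is a hypothetical example chosen for illustration, not the specific model analyzed in the paper; it shows one concrete sense in which a strictly nonlinear observation can be blind to part of the signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 2000  # signal dimension and number of observations

# Hypothetical example of a strictly nonlinear generative model
# y = f(Ax / sqrt(n)) with an even link f(z) = z**2. Because f is even,
# the observations are identical for x and -x, so no estimator can
# recover the global sign of the signal from y alone.
x = rng.choice([-1.0, 1.0], size=n)   # Rademacher signal
A = rng.standard_normal((m, n))       # Gaussian projection matrix
z = A @ x / np.sqrt(n)                # Gaussian field seen by the link
y = z**2                              # strictly nonlinear (even) observation

# The observations generated from x and from -x coincide exactly.
y_flipped = (A @ (-x) / np.sqrt(n))**2
print(np.allclose(y, y_flipped))  # True
```

This sign ambiguity is only a toy instance of the broader phenomenon; the all-or-nothing transition in the paper concerns the load (ratio of observations to signal dimension) at which Bayesian inference collapses from perfect to uncorrelated.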
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - DiffHybrid-UQ: Uncertainty Quantification for Differentiable Hybrid
Neural Modeling [4.76185521514135]
We introduce a novel method, DiffHybrid-UQ, for effective and efficient uncertainty propagation and estimation in hybrid neural differentiable models.
Specifically, our approach effectively discerns and quantifies both aleatoric uncertainties, arising from data noise, and epistemic uncertainties, resulting from model-form discrepancies and data sparsity.
arXiv Detail & Related papers (2023-12-30T07:40:47Z) - A Metalearned Neural Circuit for Nonparametric Bayesian Inference [4.767884267554628]
Most applications of machine learning to classification assume a closed set of balanced classes.
This is at odds with the real world, where class occurrence statistics often follow a long-tailed power-law distribution.
We present a method for extracting the inductive bias from a nonparametric Bayesian model and transferring it to an artificial neural network.
arXiv Detail & Related papers (2023-11-24T16:43:17Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation:
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Posterior Collapse and Latent Variable Non-identifiability [54.842098835445]
We propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility.
Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
arXiv Detail & Related papers (2023-01-02T06:16:56Z) - Towards a Unified Framework for Uncertainty-aware Nonlinear Variable
Selection with Theoretical Guarantees [2.1506382989223782]
We develop a simple and unified framework for nonlinear variable selection that incorporates model uncertainty.
We show that the approach is generalizable even to non-differentiable models such as tree ensembles.
arXiv Detail & Related papers (2022-04-15T02:12:00Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption.
However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Combining Gaussian processes and polynomial chaos expansions for
stochastic nonlinear model predictive control [0.0]
We introduce a new algorithm to explicitly consider time-invariant uncertainties in optimal control problems.
The main novelty in this paper is to use this combination in an efficient fashion to obtain mean and variance estimates of nonlinear transformations.
It is shown how to formulate both chance-constraints and a probabilistic objective for the optimal control problem.
arXiv Detail & Related papers (2021-03-09T14:25:08Z) - Hessian Eigenspectra of More Realistic Nonlinear Models [73.31363313577941]
We give a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models.
Our analysis takes a step forward to identify the origin of many striking features observed in more complex machine learning models.
arXiv Detail & Related papers (2021-03-02T06:59:52Z) - Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive models with eXogenous input and arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z) - Uncertainty Modelling in Risk-averse Supply Chain Systems Using
Multi-objective Pareto Optimization [0.0]
One of the arduous tasks in supply chain modelling is to build robust models against irregular variations.
We introduce a novel methodology, multi-objective Pareto optimization, to handle uncertainties and bound their entropy by modelling them explicitly under some a priori assumptions.
arXiv Detail & Related papers (2020-04-24T21:04:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.