Conditional Deep Inverse Rosenblatt Transports
- URL: http://arxiv.org/abs/2106.04170v1
- Date: Tue, 8 Jun 2021 08:23:11 GMT
- Title: Conditional Deep Inverse Rosenblatt Transports
- Authors: Tiangang Cui and Sergey Dolgov and Olivier Zahm
- Abstract summary: We present a novel offline-online method to mitigate the computational burden of the characterization of conditional beliefs in statistical learning.
In the offline phase, it learns the joint law of the belief random variables and the observational random variables in the tensor-train format.
In the online phase, it utilizes the resulting order-preserving conditional transport map to issue real-time characterization of the conditional beliefs given new observed information.
- Score: 2.0625936401496237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel offline-online method to mitigate the computational burden
of the characterization of conditional beliefs in statistical learning. In the
offline phase, the proposed method learns the joint law of the belief random
variables and the observational random variables in the tensor-train (TT)
format. In the online phase, it utilizes the resulting order-preserving
conditional transport map to issue real-time characterization of the
conditional beliefs given new observed information. Compared with
state-of-the-art normalizing flow techniques, the proposed method relies on
function approximation and is equipped with a thorough performance analysis.
This also allows us to further extend the capability of transport maps to
challenging problems with high-dimensional observations and high-dimensional
belief variables. On the one hand, we present novel heuristics to reorder
and/or reparametrize the variables to enhance the approximation power of TT.
On the other hand, we integrate the TT-based transport maps and the parameter
reordering/reparametrization into layered compositions to further improve the
performance of the resulting transport maps. We demonstrate the efficiency of
the proposed method on various statistical learning tasks in ordinary
differential equations (ODEs) and partial differential equations (PDEs).
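
To make the online phase concrete, here is a minimal sketch (our illustration, not the authors' TT implementation) of conditional sampling with an order-preserving (Knothe-Rosenblatt) map in two dimensions. A toy bivariate Gaussian joint law stands in for the TT surrogate; the parameter values are hypothetical.

```python
# Minimal sketch of the online phase: a toy Gaussian joint law of (x, y)
# stands in for the TT surrogate, so the conditional CDF is available in
# closed form; in the paper it would be computed from the TT decomposition.
import numpy as np
from scipy import stats

rho = 0.8  # correlation between belief variable x and observation y (toy value)

def conditional_inverse_rosenblatt(u, y_obs):
    """Map uniform samples u ~ U(0,1) to samples of x ~ p(x | y = y_obs)."""
    # Gaussian conditional: x | y ~ N(rho * y, 1 - rho^2)
    cond = stats.norm(loc=rho * y_obs, scale=np.sqrt(1.0 - rho**2))
    return cond.ppf(u)  # inverting the conditional CDF is the Rosenblatt step

# Online phase: a new observation arrives and conditional samples are immediate.
rng = np.random.default_rng(0)
y_obs = 1.5
samples = conditional_inverse_rosenblatt(rng.uniform(size=10_000), y_obs)
print(samples.mean(), rho * y_obs)  # empirical mean should match rho * y_obs
```

The offline/online split is visible even in this toy: everything that depends only on the joint law can be precomputed, and each new observation then costs only a one-dimensional CDF inversion per belief variable.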
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density by transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
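One transport PDE of this kind (our reading of the summary, not necessarily the paper's exact formulation) is the continuity equation; a PINN parametrizes the unknown density path and/or velocity field and minimizes the squared residual of

```latex
\partial_t \rho_t(x) + \nabla \cdot \bigl(\rho_t(x)\, v_t(x)\bigr) = 0,
\qquad \rho_0 = \text{tractable reference}, \qquad \rho_1 = \text{target}.
```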
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- State-Free Inference of State-Space Models: The Transfer Function Approach [132.83348321603205]
State-free inference does not incur any significant memory or computational cost with an increase in state size.
We achieve this using properties of the proposed frequency domain transfer function parametrization.
We report improved perplexity in language modeling over a long convolutional Hyena baseline.
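A minimal sketch of the frequency-domain idea (the coefficients `b`, `a` and the rational form are illustrative assumptions, not the paper's parametrization): evaluate the transfer function on the unit circle and materialize the convolution kernel by inverse FFT, without ever forming state-space matrices.

```python
# "State-free" kernel materialization from a rational transfer function
# H(z) = B(z^-1) / A(z^-1): memory does not grow with the state dimension,
# since only the polynomial coefficients are stored.
import numpy as np

L = 1024
b = np.array([1.0, 0.5])         # numerator coefficients (hypothetical)
a = np.array([1.0, -0.9, 0.2])   # denominator coefficients (hypothetical, stable)

z = np.exp(2j * np.pi * np.arange(L) / L)    # L-th roots of unity
H = np.polyval(b[::-1], 1 / z) / np.polyval(a[::-1], 1 / z)

kernel = np.fft.ifft(H).real                 # length-L impulse response
x = np.random.default_rng(0).normal(size=L)
y = np.fft.ifft(np.fft.fft(x) * H).real      # circular convolution in O(L log L)
```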
arXiv Detail & Related papers (2024-05-10T00:06:02Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
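For orientation, here is a minimal bootstrap particle filter on a linear-Gaussian model (our sketch; all names and values are illustrative). The accumulated log-likelihood estimate is the quantity whose variational analogue, the SMC ELBO, Online VSMC optimizes on-the-fly while also adapting the proposal.

```python
# Bootstrap particle filter: propagate, weight, resample; accumulates an
# unbiased estimate of the marginal likelihood p(y_{1:T}).
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 500                 # time steps, particles
phi, q, r = 0.9, 0.5, 1.0      # transition coefficient, process/obs. noise std

# Simulate synthetic data from the model
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

particles = rng.normal(size=N)
log_like = 0.0
for t in range(T):
    particles = phi * particles + q * rng.normal(size=N)       # propagate
    logw = -0.5 * ((y[t] - particles) / r) ** 2 \
           - 0.5 * np.log(2 * np.pi * r**2)                    # obs. log-density
    m = logw.max()
    log_like += m + np.log(np.mean(np.exp(logw - m)))          # log-mean-exp
    w = np.exp(logw - m)
    particles = particles[rng.choice(N, size=N, p=w / w.sum())]  # resample
print("log-likelihood estimate:", log_like)
```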
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference [1.740133468405535]
We present two neural network approaches that approximate the solutions of static and dynamic conditional optimal transport (COT) problems.
We demonstrate both algorithms, comparing them with competing state-of-the-art approaches, using benchmark datasets and simulation-based inverse problems.
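The static COT problem has the following shape (our paraphrase from the summary, not the paper's exact statement): for each observation y, push a reference η to the conditional π(· | y) at minimal expected cost.

```latex
\min_{T}\; \mathbb{E}_{y \sim \pi_Y}\, \mathbb{E}_{x \sim \eta}
\bigl[\, c\bigl(x,\, T(x; y)\bigr) \bigr]
\quad \text{s.t.} \quad T(\,\cdot\,; y)_{\#}\, \eta = \pi(\,\cdot \mid y)
\ \ \text{for all } y .
```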
arXiv Detail & Related papers (2023-10-25T20:20:09Z)
- A transport approach to sequential simulation-based inference [0.0]
We present a new transport-based approach to efficiently perform sequential Bayesian inference of static model parameters.
The strategy is based on the extraction of conditional distributions from the joint distribution of parameters and data, via the estimation of structured (e.g., block triangular) transport maps.
This allows gradient-based characterization of the posterior density via transport maps in a model-free, online phase.
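The block-triangular structure is what makes the conditioning explicit: if a lower block-triangular map pushes the joint to a product reference, inverting its lower block at the observed data samples the posterior (a standard Knothe-Rosenblatt-type construction, consistent with the summary).

```latex
T(y, \theta) =
\begin{pmatrix} T_{\mathcal{Y}}(y) \\ T_{\Theta}(y, \theta) \end{pmatrix},
\qquad T_{\#}\, \pi(y, \theta) = \eta_{\mathcal{Y}} \otimes \eta_{\Theta},
\qquad
\theta = T_{\Theta}(y^{*}, \cdot\,)^{-1}(z), \ \ z \sim \eta_{\Theta}
\ \Longrightarrow\ \theta \sim \pi(\theta \mid y^{*}).
```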
arXiv Detail & Related papers (2023-08-26T18:53:48Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
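For reference, the ODE-based sampler of a variance-exploding SDE with noise scale σ(t) is its probability-flow ODE (standard form; the mean-shift connection is this paper's contribution):

```latex
\frac{\mathrm{d}x_t}{\mathrm{d}t}
= -\,\dot{\sigma}(t)\,\sigma(t)\,\nabla_{x}\log p_t(x_t),
```

where p_t is the marginal density at noise level σ(t). Each step moves x_t along the score, which is also the direction a mean-shift iteration follows.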
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- On Hypothesis Transfer Learning of Functional Linear Models [8.557392136621894]
We study transfer learning (TL) for functional linear regression (FLR) under the Reproducing Kernel Hilbert Space (RKHS) framework.
We measure the similarity across tasks using RKHS distance, allowing the type of information being transferred to be tied to the properties of the imposed RKHS.
Two algorithms are proposed: one conducts the transfer when positive sources are known, while the other leverages aggregation to achieve robust transfer without prior information about the sources.
arXiv Detail & Related papers (2022-06-09T04:50:16Z)
- Comparing Probability Distributions with Conditional Transport [63.11403041984197]
We propose conditional transport (CT) as a new divergence and approximate it with the amortized CT (ACT) cost.
ACT amortizes the computation of its conditional transport plans and comes with unbiased sample gradients that are straightforward to compute.
On a wide variety of benchmark datasets for generative modeling, substituting the default statistical distance of an existing generative adversarial network with ACT is shown to consistently improve performance.
arXiv Detail & Related papers (2020-12-28T05:14:22Z)
- Deep composition of tensor-trains using squared inverse Rosenblatt transports [0.6091702876917279]
This paper generalises the functional tensor-train approximation of the inverse Rosenblatt transport.
We develop an efficient procedure to compute this transport from a squared tensor-train decomposition.
The resulting deep inverse Rosenblatt transport significantly expands the capability of tensor approximations and transport maps to random variables.
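In the squared construction, a tensor-train surrogate g approximates the square root of the target density, so nonnegativity of the density estimate comes for free:

```latex
g(x) = \mathsf{G}_1(x_1)\, \mathsf{G}_2(x_2) \cdots \mathsf{G}_d(x_d),
\qquad \hat{\pi}(x) \propto g(x)^2 \;\ge\; 0,
```

with matrix-valued cores G_k(x_k) of size r_{k-1} × r_k (r_0 = r_d = 1). The marginals needed to build the inverse Rosenblatt transport then reduce to one-dimensional integrals over products of small cores.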
arXiv Detail & Related papers (2020-07-14T11:04:18Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
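The underlying identity is standard (due to Jordan, Kinderlehrer, and Otto): the Fokker-Planck equation is the 2-Wasserstein gradient flow of the relative entropy F(ρ) = KL(ρ ‖ π),

```latex
\partial_t \rho_t
= \nabla \cdot \Bigl( \rho_t\, \nabla \tfrac{\delta F}{\delta \rho}(\rho_t) \Bigr)
= \nabla \cdot \Bigl( \rho_t\, \nabla \log \tfrac{\rho_t}{\pi} \Bigr),
```

so discretizing this flow in Wasserstein space yields a numerical scheme that drives samples toward the target π.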
arXiv Detail & Related papers (2019-10-31T02:26:20Z)