Deep composition of tensor-trains using squared inverse Rosenblatt transports
- URL: http://arxiv.org/abs/2007.06968v3
- Date: Tue, 19 Oct 2021 02:51:18 GMT
- Title: Deep composition of tensor-trains using squared inverse Rosenblatt transports
- Authors: Tiangang Cui and Sergey Dolgov
- Abstract summary: This paper generalises the functional tensor-train approximation of the inverse Rosenblatt transport.
We develop an efficient procedure to compute this transport from a squared tensor-train decomposition.
The resulting deep inverse Rosenblatt transport significantly expands the capability of tensor approximations and transport maps to random variables with complicated nonlinear interactions and concentrated density functions.
- Score: 0.6091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Characterising intractable high-dimensional random variables is one of the
fundamental challenges in stochastic computation. The recent surge of transport
maps offers a mathematical foundation and new insights for tackling this
challenge by coupling intractable random variables with tractable reference
random variables. This paper generalises the functional tensor-train
approximation of the inverse Rosenblatt transport recently developed by Dolgov
et al. (Stat Comput 30:603--625, 2020) to a wide class of high-dimensional
non-negative functions, such as unnormalised probability density functions.
First, we extend the inverse Rosenblatt transform to enable the transport to
general reference measures other than the uniform measure. We develop an
efficient procedure to compute this transport from a squared tensor-train
decomposition which preserves the monotonicity. More crucially, we integrate
the proposed order-preserving functional tensor-train transport into a nested
variable transformation framework inspired by the layered structure of deep
neural networks. The resulting deep inverse Rosenblatt transport significantly
expands the capability of tensor approximations and transport maps to random
variables with complicated nonlinear interactions and concentrated density
functions. We demonstrate the efficiency of the proposed approach on a range of
applications in statistical learning and uncertainty quantification, including
parameter estimation for dynamical systems and inverse problems constrained by
partial differential equations.
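For intuition, here is a minimal numerical sketch of the core construction: a two-dimensional inverse Rosenblatt transport built from a density written as the square of an approximation, which guarantees non-negativity and hence monotone conditional CDFs. This is not the authors' implementation; the grid-based tabulation stands in for the functional tensor-train representation, and the example target, grid sizes, and helper names are illustrative assumptions.

```python
import numpy as np

def cdf(vals, grid):
    """Normalised cumulative (trapezoidal) integral of vals >= 0 over grid."""
    c = np.concatenate(([0.0], np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))))
    return c / c[-1]

n = 513
x = np.linspace(-4.0, 4.0, n)
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Unnormalised target p = g^2; in the paper g is a functional tensor train,
# here a cheap closed-form surrogate with a curved "banana" shape.
g = np.exp(-0.25 * (X1**2 + (X2 - 0.5 * X1**2) ** 2))
p = g**2                                   # non-negative by construction

F1 = cdf(np.trapz(p, x, axis=1), x)        # CDF of the x1 marginal

rng = np.random.default_rng(0)
u = rng.uniform(size=(5000, 2))            # uniform reference samples
x1 = np.interp(u[:, 0], F1, x)             # x1 = F1^{-1}(u1)

cols = np.clip(np.searchsorted(x, x1), 0, n - 1)   # grid column near each x1
x2 = np.empty_like(x1)
for k, j in enumerate(cols):
    F2 = cdf(p[j, :], x)                   # conditional CDF F_{2|1}(. | x1)
    x2[k] = np.interp(u[k, 1], F2, x)      # x2 = F_{2|1}^{-1}(u2 | x1)

samples = np.column_stack((x1, x2))        # approximately distributed as p
```

In the paper, the marginal and conditional integrals are obtained analytically from the squared tensor-train cores rather than on a grid, and the deep variant composes several such order-preserving maps so that each layer pushes the reference measure progressively closer to a concentrated or strongly nonlinear target.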
Related papers
- RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-identification [59.5042031913258]
Non-linear modality discrepancy mainly comes from diverse linear transformations acting on the surface of different materials.
We propose a Random Linear Enhancement (RLE) strategy that includes Moderate Random Linear Enhancement (MRLE) and Radical Random Linear Enhancement (RRLE).
The experimental results not only demonstrate the superiority and effectiveness of RLE but also confirm its great potential as a general-purpose data augmentation for cross-spectral re-identification.
arXiv Detail & Related papers (2024-11-02T12:13:37Z)
- Probing hydrodynamic crossovers with dissipation-assisted operator evolution [0.0]
We chart the emergence of diffusion in a generic interacting lattice model for varying U(1) charge densities.
Our results clarify the dominant contributions to hydrodynamic correlation functions of conserved densities, and serve as a guide for generalizations to low temperature transport.
arXiv Detail & Related papers (2024-08-15T16:39:10Z)
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
- TERM Model: Tensor Ring Mixture Model for Density Estimation [48.622060998018206]
In this paper, we adopt tensor ring decomposition as a density estimator, which significantly reduces the number of permutation candidates.
A mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility.
This approach acknowledges that suboptimal permutations can offer distinctive information besides that of optimal permutations.
arXiv Detail & Related papers (2023-12-13T11:39:56Z)
- Arbitrary Distributions Mapping via SyMOT-Flow: A Flow-based Approach Integrating Maximum Mean Discrepancy and Optimal Transport [2.7309692684728617]
We introduce a novel model called SyMOT-Flow that trains an invertible transformation by minimizing the symmetric maximum mean discrepancy between samples from two unknown distributions.
The resulting transformation leads to more stable and accurate sample generation.
arXiv Detail & Related papers (2023-08-26T08:39:16Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variant of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Theory on variational high-dimensional tensor networks [2.0307382542339485]
We investigate the emergent statistical properties of random high-dimensional tensor-network states and the trainability of high-dimensional tensor networks.
We prove that variational high-dimensional tensor networks suffer from barren plateaus for global loss functions.
Our results pave the way for their future theoretical studies and practical applications.
arXiv Detail & Related papers (2023-03-30T15:26:30Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Deep importance sampling using tensor trains with application to a priori and a posteriori rare event estimation [2.4815579733050153]
We propose a deep importance sampling method that is suitable for estimating rare event probabilities in high-dimensional problems.
We approximate the optimal importance distribution in a general importance sampling problem as the pushforward of a reference distribution under a composition of order-preserving transformations.
The squared tensor-train decomposition provides a scalable ansatz for building order-preserving high-dimensional transformations via density approximations.
arXiv Detail & Related papers (2022-09-05T12:44:32Z)
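To make the pushforward construction in the entry above concrete, here is a hedged sketch of self-normalised importance sampling with a transport-map proposal. The callables `T` (the composed order-preserving map), `q` (the tractable pushforward density), `target` (the unnormalised target density), and `indicator` (the rare-event set) are placeholders for whatever a trained transport provides; they are assumptions, not the paper's interface.

```python
import numpy as np

def rare_event_probability(T, q, target, indicator, n=100_000, d=2, seed=0):
    """Self-normalised importance-sampling estimate of P_target[indicator],
    with proposals x = T(u) pushed forward from a uniform reference."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n, d))       # reference samples
    xs = T(u)                          # proposals from the pushforward map
    w = target(xs) / q(xs)             # unnormalised importance weights
    return np.sum(w * indicator(xs)) / np.sum(w)
```

The closer the pushforward density q tracks the optimal importance distribution, the smaller the variance of the weights, which is why an accurate order-preserving transport is the crux of the method.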
- Conditional Deep Inverse Rosenblatt Transports [2.0625936401496237]
We present a novel offline-online method to mitigate the computational burden of the characterization of conditional beliefs in statistical learning.
In the offline phase, it learns the joint law of the belief random variables and the observational random variables in the tensor-train format.
In the online phase, it utilizes the resulting order-preserving conditional transport map to issue real-time characterization of the conditional beliefs given new observed information.
arXiv Detail & Related papers (2021-06-08T08:23:11Z)
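The online step in the entry above can be pictured with the grid sketch given after the abstract: because the transport is order-preserving (triangular) with observations in the leading coordinates, conditioning only requires inverting the trailing conditional CDF at the observed value. Below is a schematic stand-in, where the tabulated joint density `p` replaces the offline-learned tensor train and the function name is an assumption.

```python
import numpy as np

def sample_x2_given_x1(p, x, y_obs, rng, n=1000):
    """Draw x2 ~ p(x2 | x1 = y_obs) by inverting the conditional CDF,
    i.e. running only the trailing stage of the triangular map."""
    j = int(np.clip(np.searchsorted(x, y_obs), 0, len(x) - 1))
    row = p[j, :]                      # unnormalised conditional slice
    c = np.concatenate(([0.0], np.cumsum(0.5 * (row[1:] + row[:-1]) * np.diff(x))))
    F2 = c / c[-1]                     # conditional CDF F_{2|1}(. | y_obs)
    return np.interp(rng.uniform(size=n), F2, x)   # F_{2|1}^{-1}(u | y_obs)
```

No fresh exploration of the observation block is needed online, which is what makes real-time characterisation of conditional beliefs cheap once the joint law is learned offline.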
- Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization [106.70006655990176]
Distributional optimization problems arise widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) and smoothness conditions, variational transport converges linearly.
arXiv Detail & Related papers (2020-12-21T18:33:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.