Self-reinforced polynomial approximation methods for concentrated
probability densities
- URL: http://arxiv.org/abs/2303.02554v1
- Date: Sun, 5 Mar 2023 02:44:02 GMT
- Title: Self-reinforced polynomial approximation methods for concentrated
probability densities
- Authors: Tiangang Cui and Sergey Dolgov and Olivier Zahm
- Abstract summary: Transport map methods offer a powerful statistical learning tool that can couple a target high-dimensional random variable with some reference random variable.
This paper presents new computational techniques for building the Knothe-Rosenblatt (KR) rearrangement based on general separable functions.
- Score: 1.5469452301122175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transport map methods offer a powerful statistical learning tool that can
couple a target high-dimensional random variable with some reference random
variable using invertible transformations. This paper presents new
computational techniques for building the Knothe--Rosenblatt (KR) rearrangement
based on general separable functions. We first introduce a new construction of
the KR rearrangement -- with guaranteed invertibility in its numerical
implementation -- based on approximating the density of the target random
variable using tensor-product spectral polynomials and downward closed sparse
index sets. Compared to other constructions of KR rearrangements based on either
multi-linear approximations or nonlinear optimizations, our new construction
only relies on a weighted least squares approximation procedure. Then, inspired
by the recently developed deep tensor trains (Cui and Dolgov, Found. Comput.
Math. 22:1863--1922, 2022), we enhance the approximation power of sparse
polynomials by preconditioning the density approximation problem using
compositions of maps. This is particularly suitable for high-dimensional and
concentrated probability densities commonly seen in many applications. We
approximate the complicated target density by a composition of self-reinforced
KR rearrangements, in which previously constructed KR rearrangements -- based
on the same approximation ansatz -- are used to precondition the density
approximation problem for building each new KR rearrangement. We demonstrate
the efficiency of our proposed methods and the importance of using the
composite map on several inverse problems governed by ordinary differential
equations (ODEs) and partial differential equations (PDEs).
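To make the two ingredients of the abstract concrete, here is a minimal one-dimensional sketch in Python: a polynomial is fitted to the square root of the target density by weighted least squares (so the squared surrogate is non-negative and the resulting Rosenblatt map is monotone, hence invertible), and a second, self-reinforced layer fits the pullback of the target through the first map before composing the two. Everything below is illustrative and assumed, not taken from the paper: the helper names (fit_sqrt_density, kr_map_from_coef), the Legendre basis on [-1, 1], the uniform reference, and the finite-difference Jacobian stand in for the tensor-product spectral polynomials, downward-closed sparse index sets, and high-dimensional triangular maps used in the actual method.

```python
# Minimal 1D sketch (assumed names and setup, not the paper's code):
# 1) fit a polynomial g(x) ~ sqrt(pi(x)) by weighted least squares, so the
#    surrogate density g(x)^2 is non-negative and its CDF is monotone;
# 2) invert that CDF to get a (Knothe--)Rosenblatt map from a uniform reference;
# 3) "self-reinforce": fit the pullback of pi through the first map with the
#    same ansatz and compose the two maps.
import numpy as np
from numpy.polynomial import legendre


def fit_sqrt_density(logpdf, degree=40, n_quad=200):
    """Weighted least-squares fit of g(x) ~ sqrt(pi(x)) in a Legendre basis on [-1, 1]."""
    x, w = legendre.leggauss(n_quad)            # Gauss--Legendre nodes and weights
    f = np.exp(0.5 * logpdf(x))                 # values of sqrt(pi) at the nodes
    V = legendre.legvander(x, degree)           # Vandermonde matrix of the basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], f * sw, rcond=None)
    return coef                                 # g(x) = sum_k coef[k] * P_k(x)


def kr_map_from_coef(coef, n_grid=2000):
    """Rosenblatt map T: [0, 1] -> [-1, 1] with F(T(u)) = u for the CDF F of g(x)^2."""
    x = np.linspace(-1.0, 1.0, n_grid)
    dens = legendre.legval(x, coef) ** 2        # non-negative by construction
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(x))])
    cdf /= cdf[-1]                              # normalise; monotone CDF => invertible map
    return lambda u: np.interp(u, cdf, x)


# Toy concentrated target on [-1, 1]: a narrow (unnormalised) Gaussian bump.
logpdf = lambda x: -0.5 * ((x - 0.3) / 0.1) ** 2

# Layer 1: a single polynomial-based KR map fitted to the target directly.
T1 = kr_map_from_coef(fit_sqrt_density(logpdf))

# Layer 2 (self-reinforcement): fit the pullback density pi(T1(u)) * |T1'(u)|,
# which is much flatter than pi itself, with the same ansatz, then compose.
eps = 1e-4
def logpdf_pullback(u):
    u01 = 0.5 * (u + 1.0)                       # rescale [-1, 1] to the [0, 1] reference
    jac = (T1(u01 + eps) - T1(u01 - eps)) / (2.0 * eps)   # finite-difference |T1'|
    return logpdf(T1(u01)) + np.log(np.maximum(jac, 1e-300))

T2 = kr_map_from_coef(fit_sqrt_density(logpdf_pullback))
composite = lambda u: T1(0.5 * (T2(u) + 1.0))   # self-reinforced composition T1 o T2

u = np.random.rand(5)                           # uniform reference samples
print("layer-1 samples:  ", T1(u))
print("composite samples:", composite(u))
```

In the paper's setting the same idea is applied coordinate by coordinate, with each one-dimensional conditional CDF obtained by integrating the squared polynomial surrogate, which is what keeps every layer of the composition invertible.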
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Robust scalable initialization for Bayesian variational inference with
multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from a quadratic growth in the number of variational parameters with the number of model parameters.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z) - Variational sparse inverse Cholesky approximation for latent Gaussian
processes via double Kullback-Leibler minimization [6.012173616364571]
We combine a variational approximation of the posterior with a similar and efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior.
For this setting, our variational approximation can be computed via gradient descent in polylogarithmic time per iteration.
We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate for stationary kernels than alternative approaches.
arXiv Detail & Related papers (2023-01-30T21:50:08Z) - Semi-Discrete Normalizing Flows through Differentiable Tessellation [31.474420819149724]
We propose a tessellation-based approach that learns quantization boundaries on a continuous space, complete with exact likelihood evaluations.
This is done through constructing normalizing flows on convex polytopes parameterized through a differentiable Voronoi tessellation.
We show improvements over existing methods across a range of structured data modalities, and find that we can achieve a significant gain from just adding Voronoi mixtures to a baseline model.
arXiv Detail & Related papers (2022-03-14T03:06:31Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Adaptive deep density approximation for Fokker-Planck equations [0.0]
We present an adaptive deep density approximation strategy based on KRnet (ADDA-KR) for solving the steady-state Fokker-Planck equation.
We show that KRnet can efficiently estimate general high-dimensional density functions.
arXiv Detail & Related papers (2021-03-20T13:49:52Z) - Eigenvalue-corrected Natural Gradient Based on a New Approximation [31.1453204659019]
Eigenvalue-corrected Kronecker Factorization (EKFAC) is a method proposed for training deep neural networks (DNNs).
In this work, we combine the ideas of these two methods and propose Trace-restricted Eigenvalue-corrected Kronecker Factorization (TEKFAC).
The proposed method not only corrects the inexact re-scaling factor under the Kronecker-factored eigenbasis, but also considers the new approximation technique proposed in Gao et al.
arXiv Detail & Related papers (2020-11-27T08:57:29Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)