Generative Modeling via Hierarchical Tensor Sketching
- URL: http://arxiv.org/abs/2304.05305v1
- Date: Tue, 11 Apr 2023 15:55:13 GMT
- Title: Generative Modeling via Hierarchical Tensor Sketching
- Authors: Yifan Peng, Yian Chen, E. Miles Stoudenmire, Yuehaw Khoo
- Abstract summary: We propose a hierarchical tensor-network approach for approximating a high-dimensional probability density from its empirical distribution.
The complexity of the resulting algorithm scales linearly in the dimension of the high-dimensional density.
- Score: 12.005736675688917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a hierarchical tensor-network approach for approximating
a high-dimensional probability density from its empirical distribution. This leverages
randomized singular value decomposition (SVD) techniques and involves solving
linear equations for tensor cores in this tensor network. The complexity of the
resulting algorithm scales linearly in the dimension of the high-dimensional
density. An analysis of the estimation error is provided, and the effectiveness of
the method is demonstrated through several numerical experiments.
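The randomized SVD the abstract leans on can be sketched generically. Below is a minimal range-finder implementation in the standard Halko-Martinsson-Tropp style, not the paper's hierarchical variant; the function name and oversampling default are illustrative choices:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Randomized SVD: sketch the range of A with a Gaussian test matrix,
    then take an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    # Y = A @ Omega captures the dominant column space of A.
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)  # orthonormal basis for the sketched range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Toy check: an exactly rank-5 matrix is recovered to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

For an exactly low-rank input, the sketched basis spans the true range almost surely, so the reconstruction is exact up to floating point; for noisy inputs the oversampling parameter trades accuracy against cost.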
Related papers
- TERM Model: Tensor Ring Mixture Model for Density Estimation [48.622060998018206]
In this paper, we adopt tensor ring decomposition for the density estimator, which significantly reduces the number of permutation candidates.
A mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility.
This approach acknowledges that suboptimal permutations can offer distinctive information besides that of optimal permutations.
arXiv Detail & Related papers (2023-12-13T11:39:56Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Theory on variational high-dimensional tensor networks [2.0307382542339485]
We investigate the emergent statistical properties of random high-dimensional tensor-network states and the trainability of these tensor networks.
We prove that variational high-dimensional tensor networks suffer from barren plateaus for global loss functions.
Our results pave the way for their future theoretical studies and practical applications.
arXiv Detail & Related papers (2023-03-30T15:26:30Z)
- High-dimensional density estimation with tensorizing flow [5.457842083043014]
We propose the tensorizing flow method for estimating high-dimensional probability density functions from the observed data.
The proposed method combines the optimization-free construction of the tensor train with the flexibility of flow-based generative models.
arXiv Detail & Related papers (2022-12-01T18:45:45Z)
- Moment Estimation for Nonparametric Mixture Models Through Implicit Tensor Decomposition [7.139680863764187]
We present an alternating least squares-type numerical optimization scheme to estimate conditionally independent mixture models in $\mathbb{R}^n$.
We compute the cumulative distribution functions, higher moments and other statistics of the component distributions through linear solves.
Numerical experiments demonstrate the competitive performance of the algorithm, and its applicability to many models and applications.
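The alternating least squares idea can be illustrated in miniature. This is a generic rank-1 matrix ALS, not the paper's implicit tensor scheme: each half-step fixes one factor and solves the resulting linear least-squares problem in closed form.

```python
import numpy as np

def als_rank1(M, iters=50, seed=0):
    """Fit M ~ outer(u, v) by alternating least squares: with one factor
    fixed, the optimal other factor is a closed-form linear solve."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        u = M @ v / (v @ v)    # least-squares update of u with v fixed
        v = M.T @ u / (u @ u)  # least-squares update of v with u fixed
    return u, v

# Toy check on an exactly rank-1 matrix.
rng = np.random.default_rng(2)
M = np.outer(rng.standard_normal(40), rng.standard_normal(30))
u, v = als_rank1(M)
err = np.linalg.norm(M - np.outer(u, v)) / np.linalg.norm(M)
print(f"relative error: {err:.2e}")
```

For an exactly rank-1 target the iteration converges after a single sweep (it is essentially power iteration); higher ranks and tensor factors replace the scalar divisions with small linear systems.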
arXiv Detail & Related papers (2022-10-25T23:31:33Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
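For intuition, the matrix analogue of cross approximation fits in a few lines: a rank-r matrix is reconstructed exactly from r rows and r columns whose intersection is nonsingular. This toy picks the indices deterministically (which works almost surely for Gaussian factors) rather than adaptively as the tensor-train algorithms do:

```python
import numpy as np

# Matrix skeleton (cross) approximation: choose r rows I and r columns J;
# if A[I, J] is nonsingular and A has rank r, then
# A = A[:, J] @ inv(A[I, J]) @ A[I, :] holds exactly.
rng = np.random.default_rng(3)
r = 4
A = rng.standard_normal((60, r)) @ rng.standard_normal((r, 50))  # exactly rank 4

I, J = np.arange(r), np.arange(r)  # fixed indices; fine a.s. for Gaussian factors
A_cross = A[:, J] @ np.linalg.solve(A[np.ix_(I, J)], A[I, :])
err = np.linalg.norm(A - A_cross) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

The error analysis in the paper concerns what happens when the tensor is only approximately low rank and the measured entries are noisy, where index choice matters greatly.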
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Generative modeling via tensor train sketching [0.0]
We introduce a sketching algorithm for constructing a tensor train representation of a probability density from its samples.
We prove that the tensor cores can be recovered with a sample complexity that scales logarithmically in the dimensionality.
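To picture the target representation, the following sketch builds an empirical histogram from samples of a product density and compresses it with a plain TT-SVD (sequential truncated SVDs). This is only a stand-in: the paper's sketching algorithm recovers the cores from samples without ever forming the full tensor, which is what makes it feasible in high dimension. All names here are illustrative.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Plain TT-SVD: peel off one core per dimension via truncated SVDs."""
    dims, cores, r_prev = T.shape, [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Multiply the cores back into a full tensor (for checking only)."""
    out = cores[0]
    for C in cores[1:]:
        out = np.tensordot(out, C, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Empirical 3-D histogram of independent coordinates: the true density
# tensor is a product, i.e. TT-rank 1, so a small TT rank suffices.
rng = np.random.default_rng(4)
X = rng.integers(0, 5, size=(200_000, 3))
hist, _ = np.histogramdd(X, bins=(5, 5, 5), range=[(-0.5, 4.5)] * 3)
P = hist / hist.sum()
err = np.abs(tt_contract(tt_svd(P, max_rank=2)) - P).max()
print(f"max reconstruction error: {err:.2e}")
```

The residual here is sampling noise beyond the retained rank; the quoted result says the number of samples needed to pin down the cores grows only logarithmically with the number of dimensions.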
arXiv Detail & Related papers (2022-02-23T21:11:34Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Low-rank Characteristic Tensor Density Estimation Part I: Foundations [38.05393186002834]
We propose a novel approach that builds upon tensor factorization tools.
In order to circumvent the curse of dimensionality, we introduce a low-rank model of this characteristic tensor.
We demonstrate the very promising performance of the proposed method using several measured datasets.
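The object being compressed can be pictured in one dimension: the empirical characteristic function evaluated on a frequency grid (in d dimensions this grid evaluation becomes the d-way "characteristic tensor"). The sketch below checks the estimate against the known standard-normal characteristic function; grid and sample sizes are arbitrary choices:

```python
import numpy as np

# Empirical characteristic function Phi(t) = mean(exp(i * t * x)) on a grid.
# For standard normal samples the exact answer is exp(-t^2 / 2), so the
# Monte Carlo error of the estimate can be measured directly.
rng = np.random.default_rng(5)
x = rng.standard_normal(100_000)
t = np.linspace(-2.0, 2.0, 41)
phi_emp = np.array([np.exp(1j * ti * x).mean() for ti in t])
err = np.abs(phi_emp - np.exp(-t ** 2 / 2)).max()
print(f"max deviation from exact CF: {err:.2e}")
```

The deviation shrinks like one over the square root of the sample size; the low-rank model in the paper is what keeps the multivariate grid evaluation from blowing up exponentially in d.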
arXiv Detail & Related papers (2020-08-27T18:06:19Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.