Many-body Approximation for Non-negative Tensors
- URL: http://arxiv.org/abs/2209.15338v3
- Date: Mon, 30 Oct 2023 12:43:27 GMT
- Title: Many-body Approximation for Non-negative Tensors
- Authors: Kazu Ghalamkari, Mahito Sugiyama, Yoshinobu Kawahara
- Abstract summary: We present an alternative approach to decompose non-negative tensors, called many-body approximation.
Traditional decomposition methods assume low-rankness in the representation, resulting in difficulties in global optimization and target rank selection.
- Score: 17.336552862741133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an alternative approach to decompose non-negative tensors, called
many-body approximation. Traditional decomposition methods assume low-rankness
in the representation, resulting in difficulties in global optimization and
target rank selection. We avoid these problems by energy-based modeling of
tensors, where a tensor and its mode correspond to a probability distribution
and a random variable, respectively. Our model can be globally optimized in
terms of KL divergence minimization by taking into account the interaction
between variables (that is, modes), which can be tuned more intuitively
than ranks. Furthermore, we visualize interactions between modes as tensor
networks and reveal a nontrivial relationship between many-body approximation
and low-rank approximation. We demonstrate the effectiveness of our approach in
tensor completion and approximation.
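As a rough illustration of the energy-based view described above (a minimal sketch under stated assumptions, not the paper's actual algorithm), the snippet below normalizes a non-negative 3-mode tensor into a joint distribution and fits a hypothetical two-body model, in which every entry is proportional to the exponential of a sum of pairwise interaction energies, by plain gradient descent on the KL divergence. All variable names and the optimizer choice are illustrative assumptions.

```python
import numpy as np

# Sketch: two-body (pairwise-interaction) approximation of a non-negative
# 3-mode tensor, fit by KL divergence minimization. This uses plain gradient
# descent for illustration, NOT the paper's optimization procedure.
rng = np.random.default_rng(0)
T = rng.random((4, 5, 6)) + 0.1      # strictly positive tensor
P = T / T.sum()                      # view the tensor as a joint distribution

# One energy table per pair of modes: (i,j), (j,k), (i,k)
a = np.zeros((4, 5))
b = np.zeros((5, 6))
c = np.zeros((4, 6))

def two_body(a, b, c):
    """Q[i,j,k] proportional to exp(a[i,j] + b[j,k] + c[i,k])."""
    E = a[:, :, None] + b[None, :, :] + c[:, None, :]
    Q = np.exp(E - E.max())          # stabilized softmax over all entries
    return Q / Q.sum()

def kl(P, Q):
    return float(np.sum(P * (np.log(P) - np.log(Q))))

kl0 = kl(P, two_body(a, b, c))       # KL against the uniform initial model
lr = 0.1                             # small step; the objective is convex in the energies
for _ in range(2000):
    Q = two_body(a, b, c)
    G = Q - P                        # gradient of KL(P||Q) w.r.t. the energies
    a -= lr * G.sum(axis=2)          # chain rule: sum out the mode absent
    b -= lr * G.sum(axis=0)          # from each pairwise factor
    c -= lr * G.sum(axis=1)

kl_final = kl(P, two_body(a, b, c))
print(kl0, kl_final)                 # KL decreases from the uniform start
```

Dropping or adding energy tables changes which mode interactions the model captures, which is the dial the abstract contrasts with rank selection.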
Related papers
- Non-negative Tensor Mixture Learning for Discrete Density Estimation [3.9633191508712398]
We present an expectation-maximization based framework for non-negative tensor decomposition.
We exploit that the closed-form solution of the many-body approximation can be used to update all parameters simultaneously in the M-step.
arXiv Detail & Related papers (2024-05-28T14:28:28Z)
- TERM Model: Tensor Ring Mixture Model for Density Estimation [48.622060998018206]
In this paper, we adopt tensor ring decomposition for the density estimator, which significantly reduces the number of permutation candidates.
A mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility.
This approach acknowledges that suboptimal permutations can offer distinctive information besides that of optimal permutations.
arXiv Detail & Related papers (2023-12-13T11:39:56Z)
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- A Nested Matrix-Tensor Model for Noisy Multi-view Clustering [5.132856740094742]
We propose a nested matrix-tensor model which extends the spiked rank-one tensor model of order three.
We show that our theoretical results allow us to anticipate the exact accuracy of the proposed clustering approach.
Our analysis unveils unexpected and non-trivial phase transition phenomena depending on the model parameters.
arXiv Detail & Related papers (2023-05-31T16:13:46Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation [23.38076756988258]
We propose a new single-model based approach to quantify uncertainty in deep neural networks.
We use a mean-field approximation formula to compute an analytically intractable integral.
Empirically, the proposed approach performs competitively when compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-06-13T07:32:38Z)
- Enhanced nonconvex low-rank approximation of tensor multi-modes for tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
- Partially Observed Dynamic Tensor Response Regression [17.930417764563106]
In modern data science, dynamic tensor data are prevalent in numerous applications.
We develop a regression model with partially observed dynamic tensor sparsity as a predictor.
We illustrate the efficacy of our proposed method using simulations and two real applications.
arXiv Detail & Related papers (2020-02-22T17:14:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.