Chebyshev approximation and composition of functions in matrix product states for quantum-inspired numerical analysis
- URL: http://arxiv.org/abs/2407.09609v1
- Date: Fri, 12 Jul 2024 18:00:06 GMT
- Title: Chebyshev approximation and composition of functions in matrix product states for quantum-inspired numerical analysis
- Authors: Juan José Rodríguez-Aldavero, Paula García-Molina, Luca Tagliacozzo, Juan José García-Ripoll
- Abstract summary: The paper proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants.
It demonstrates rapid convergence for highly differentiable functions, aligning with theoretical predictions, and generalizes efficiently to multidimensional scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work explores the representation of univariate and multivariate functions as matrix product states (MPS), also known as quantized tensor-trains (QTT). It proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants. It demonstrates rapid convergence for highly differentiable functions, aligning with theoretical predictions, and generalizes efficiently to multidimensional scenarios. The performance of the algorithm is compared with that of tensor cross-interpolation (TCI) and multiscale interpolative constructions in a comprehensive comparative study. When function evaluation is inexpensive or when the function is not analytic, TCI is generally more efficient for function loading. However, the proposed method shows competitive performance, outperforming TCI in certain multivariate scenarios. Moreover, it shows advantageous scaling rates and generalizes to a wider range of tasks by providing a framework for function composition in MPS, which is useful for non-linear problems and many-body statistical physics.
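To make the pipeline concrete, here is a minimal dense-vector sketch in plain NumPy (sizes and function names are illustrative; the paper's algorithm instead runs the Clenshaw recurrence directly in MPS arithmetic, truncating after each step): it builds the Chebyshev coefficients of a target function, evaluates the interpolant on a 2^n grid with the Clenshaw recurrence, and compresses the sampled vector into a QTT/MPS by sequential truncated SVDs.

```python
# A minimal dense-vector sketch of the pipeline (NOT the authors' MPS-native
# implementation): expand f in Chebyshev polynomials, evaluate the series on
# a 2**n grid with the Clenshaw recurrence, then compress the sampled vector
# into a QTT/MPS by sequential truncated SVDs.
import numpy as np

def chebyshev_coefficients(f, order):
    """Coefficients c_k of f(x) ~ sum_k c_k T_k(x) on [-1, 1], computed
    from samples at the Chebyshev-Gauss nodes via discrete orthogonality."""
    k = np.arange(order + 1)
    nodes = np.cos(np.pi * (k + 0.5) / (order + 1))
    T = np.cos(np.pi * np.outer(k, k + 0.5) / (order + 1))  # T[k, j] = T_k(x_j)
    c = (2.0 / (order + 1)) * (T @ f(nodes))
    c[0] /= 2.0
    return c

def clenshaw(c, x):
    """Evaluate sum_k c_k T_k(x) with the Clenshaw recurrence."""
    b1 = np.zeros_like(x)
    b2 = np.zeros_like(x)
    for ck in c[:0:-1]:                       # c_K, ..., c_1
        b1, b2 = ck + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

def vector_to_mps(v, n, tol=1e-12):
    """Compress a length-2**n vector into QTT/MPS cores by sweeping
    truncated SVDs from the most significant bit to the least."""
    cores, mat = [], v.reshape(1, -1)
    for _ in range(n - 1):
        r = mat.shape[0]
        U, s, Vt = np.linalg.svd(mat.reshape(2 * r, -1), full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :keep].reshape(r, 2, keep))
        mat = s[:keep, None] * Vt[:keep]
    cores.append(mat.reshape(-1, 2, 1))
    return cores

n, order = 10, 40                             # illustrative sizes
x = np.linspace(-1.0, 1.0, 2**n)
v = clenshaw(chebyshev_coefficients(np.exp, order), x)
mps = vector_to_mps(v, n)
print("max interpolation error:", np.abs(v - np.exp(x)).max())
print("QTT bond dimensions:", [core.shape[2] for core in mps[:-1]])
```

For analytic targets such as exp, the Chebyshev coefficients decay geometrically, so a moderate expansion order reaches near machine precision and the truncated QTT keeps small bond dimensions, which is the regime where the paper reports rapid convergence.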
Related papers
- Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least-squares support vector machines (LSSVMs).
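As a toy illustration of that layout (hypothetical shapes, not the paper's model), tasks addressed by two indices correspond to slices of a weight tensor with one trailing mode for the features, and a CP factorization of that tensor couples the tasks through shared low-rank factors:

```python
# Hypothetical illustration (shapes invented): tasks addressed by two
# indices, e.g. (group, sensor), are slices of a weight tensor whose
# trailing mode holds the features; a CP factorization couples the tasks
# through shared low-rank factors.
import numpy as np

n_groups, n_sensors, n_features = 4, 3, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((n_groups, n_sensors, n_features))

x = rng.standard_normal(n_features)
score = W[1, 2] @ x                 # linear score of task (group=1, sensor=2)

R = 2                               # assumed low CP rank shared across tasks
A, B, C = (rng.standard_normal((m, R)) for m in (n_groups, n_sensors, n_features))
W_lowrank = np.einsum('ir,jr,kr->ijk', A, B, C)
```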
arXiv Detail & Related papers (2023-08-30T14:28:26Z)
- On the Convergence of Coordinate Ascent Variational Inference [11.166959724276337]
We consider the common coordinate ascent variational inference (CAVI) algorithm for implementing mean-field (MF) VI.
We provide general conditions for certifying global or local exponential convergence of CAVI.
A new notion of generalized correlation is introduced to characterize how the constituent blocks interact in influencing the VI objective functional.
arXiv Detail & Related papers (2023-06-01T20:19:30Z)
- Quantics Tensor Cross Interpolation for High-Resolution, Parsimonious Representations of Multivariate Functions in Physics and Beyond [0.0]
We present a strategy, quantics TCI (QTCI), which combines the advantages of the quantics representation and tensor cross interpolation.
We illustrate its potential with an application from condensed matter physics.
arXiv Detail & Related papers (2023-03-21T13:02:58Z)
- Tensorized LSSVMs for Multitask Regression [48.844191210894245]
Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.
A new MTL method, the tensorized LSSVM (tLSSVM-MTL), is proposed by leveraging low-rank tensor analysis and least-squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-03-04T16:36:03Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies using historical data, has been widely applied in real-life settings.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Provable General Function Class Representation Learning in Multitask Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning for boosting sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning within general function classes for bandits and linear MDPs.
arXiv Detail & Related papers (2022-05-31T11:36:42Z)
- Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications [43.48388050033774]
This paper provides a comprehensive analysis of a simple algorithm for both non-convex and convex objectives.
Our analysis also yields new insights for improving the practical implementation by sampling batches of equal size for the outer and inner levels.
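A hedged sketch of that insight on an invented toy objective (the moving-average inner tracker and the equal outer/inner batch size B are the ideas being illustrated; the problem and step sizes are made up, and this is not the paper's exact algorithm):

```python
# Toy finite-sum coupled compositional objective: minimize
#   F(w) = (1/n) * sum_i f(g_i(w)),  with f(u) = u**2 and
#   g_i(w) = mean_j (w @ x[i, j] - y[i, j]),
# using equal-sized outer/inner batches and a moving-average tracker u_i
# for each inner value g_i(w).  (Invented toy problem; illustrative only.)
import numpy as np

rng = np.random.default_rng(2)
n, m, d = 50, 40, 5                     # outer terms, inner samples, dim
x = rng.standard_normal((n, m, d))
y = rng.standard_normal((n, m))

w = np.zeros(d)
u = np.zeros(n)                         # moving-average trackers for g_i(w)
B, gamma, lr = 8, 0.5, 0.05             # equal outer/inner batch size B

for step in range(500):
    outer = rng.choice(n, size=B, replace=False)
    grad = np.zeros(d)
    for i in outer:
        inner = rng.choice(m, size=B, replace=False)
        g_est = np.mean(x[i, inner] @ w - y[i, inner])
        u[i] = (1 - gamma) * u[i] + gamma * g_est      # track g_i(w)
        # d/dw f(g_i(w)) = f'(u_i) * grad g_i(w), with f'(u) = 2u
        grad += 2.0 * u[i] * x[i, inner].mean(axis=0)
    w -= lr * grad / B

full_g = (x @ w).mean(axis=1) - y.mean(axis=1)   # exact g_i(w) for a check
print("objective:", np.mean(full_g ** 2))
```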
arXiv Detail & Related papers (2022-02-24T22:39:35Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- A Forward Backward Greedy approach for Sparse Multiscale Learning [0.0]
We propose a feature-driven reproducing kernel Hilbert space (RKHS) for which the associated kernel has a weighted multiscale structure.
For generating approximations in this space, we provide a practical forward-backward algorithm that is shown to greedily construct a set of basis functions having a multiscale structure.
We analyze the performance of the approach on a variety of simulation and real data sets.
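A generic forward-greedy analogue (a matching-pursuit-style loop over an assumed multiscale Gaussian dictionary, with the backward pruning step omitted; not the authors' algorithm) illustrates the flavor of such constructions:

```python
# Simplified forward-greedy sketch over a multiscale Gaussian dictionary:
# repeatedly add the candidate basis function most correlated with the
# current residual, then refit the coefficients by least squares.
# (Generic matching-pursuit analogue; the backward pruning step is omitted.)
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 200))
y = np.sin(6 * x) + 0.05 * rng.standard_normal(x.size)

scales = [1.0, 0.3, 0.1]                      # assumed multiscale widths
centers = x[::10]
# Dictionary: one Gaussian bump per (scale, center) pair.
D = np.column_stack([np.exp(-((x - c) / s) ** 2)
                     for s in scales for c in centers])

selected, residual = [], y.copy()
for _ in range(15):                           # forward greedy iterations
    corr = np.abs(D.T @ residual)
    corr[selected] = -np.inf                  # never pick an atom twice
    selected.append(int(np.argmax(corr)))
    coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
    residual = y - D[:, selected] @ coef

print("rmse:", np.sqrt(np.mean(residual ** 2)))
```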
arXiv Detail & Related papers (2021-02-14T04:22:52Z)
- Columnwise Element Selection for Computationally Efficient Nonnegative Coupled Matrix Tensor Factorization [16.466065626950424]
Nonnegative CMTF (N-CMTF) has been employed in many applications for latent pattern identification, prediction, and recommendation.
In this paper, a computationally efficient N-CMTF factorization algorithm is presented based on column-wise element selection, which avoids frequent gradient updates.
arXiv Detail & Related papers (2020-03-07T03:34:53Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the cost of modelling such interactions explicitly, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
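A minimal sketch of that idea (a generic CP-based predictor with an assumed polynomial feature map, not the paper's exact model): the order-M weight tensor is never materialized; a prediction contracts each CP factor with the corresponding feature map.

```python
# Minimal CP-predictor sketch: the order-M weight tensor is represented
# implicitly by CP factors, and a prediction contracts each factor with
# that feature's (assumed) feature map -- the full tensor is never built.
import numpy as np

M, d, R = 5, 4, 3          # features, feature-map dim, CP rank (assumed)
rng = np.random.default_rng(1)
factors = [rng.standard_normal((d, R)) for _ in range(M)]

def feature_map(v):
    """Assumed polynomial-style map of a scalar feature to d dimensions."""
    return np.array([v ** k for k in range(d)])

def predict(sample):
    """f(x) = sum_r prod_m <factors[m][:, r], phi(x_m)>."""
    prods = np.ones(R)
    for m, v in enumerate(sample):
        prods *= feature_map(v) @ factors[m]   # shape (R,)
    return prods.sum()

print(predict(rng.uniform(-1, 1, M)))
```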
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.