Learning tensor trains from noisy functions with application to quantum simulation
- URL: http://arxiv.org/abs/2405.12730v1
- Date: Tue, 21 May 2024 12:36:53 GMT
- Title: Learning tensor trains from noisy functions with application to quantum simulation
- Authors: Kohtaroh Sakaue, Hiroshi Shinaoka, Rihito Sakurai
- Abstract summary: We propose a new method that starts with an initial guess of a TT and optimizes it using non-linear least squares.
We employ this optimized TT of the correlation function in quantum simulation based on pseudo-imaginary-time evolution.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor cross interpolation (TCI) is a powerful technique for learning a tensor train (TT) by adaptively sampling a target tensor based on an interpolation formula. However, when the tensor evaluations contain random noise, optimizing the TT is more advantageous than interpolating the noise. Here, we propose a new method that starts with an initial guess of TT and optimizes it using non-linear least-squares by fitting it to measured points obtained from TCI. We use quantics TCI (QTCI) in this method and demonstrate its effectiveness on sine and two-time correlation functions, with each evaluated with random noise. The resulting TT exhibits increased robustness against noise compared to the QTCI method. Furthermore, we employ this optimized TT of the correlation function in quantum simulation based on pseudo-imaginary-time evolution, resulting in ground-state energy with higher accuracy than the QTCI or Monte Carlo methods.
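The core idea — fitting a low-rank tensor-train ansatz to noisy function evaluations by least squares instead of interpolating the noise — can be sketched in a minimal two-site setting, where the tensor train reduces to a rank-r matrix factorization. This toy uses plain alternating least squares on a noisy `sin(x + y)` grid; the grid, rank, and noise level are illustrative assumptions, not the paper's QTCI-seeded optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of f(x, y) = sin(x + y), which is exactly rank 2:
# sin(x + y) = sin(x) cos(y) + cos(x) sin(y)
n, r, sigma = 32, 2, 0.05
x = np.linspace(0, np.pi, n)
exact = np.sin(x[:, None] + x[None, :])
noisy = exact + sigma * rng.standard_normal((n, n))

# Two-site TT: cores A (n x r) and B (r x n), fitted by alternating
# least squares to minimize ||A @ B - noisy||_F^2.
A = rng.standard_normal((n, r))
B = rng.standard_normal((r, n))
for _ in range(50):
    A = np.linalg.lstsq(B.T, noisy.T, rcond=None)[0].T  # fix B, solve for A
    B = np.linalg.lstsq(A, noisy, rcond=None)[0]        # fix A, solve for B

err_fit = np.linalg.norm(A @ B - exact) / np.linalg.norm(exact)
err_raw = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
assert err_fit < err_raw  # the low-rank fit averages away part of the noise
```

Because the fit has far fewer parameters (2nr) than there are noisy samples (n^2), the least-squares solution suppresses noise rather than interpolating it — the same robustness mechanism the abstract describes, here in its simplest form.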
Related papers
- Adaptive variational quantum dynamics simulations with compressed circuits and fewer measurements [4.2643127089535104]
We present an improved version of the adaptive variational quantum dynamics simulation (AVQDS) method, which we call AVQDS(T).
The algorithm adaptively adds layers of disjoint unitary gates to the ansatz circuit so as to keep the McLachlan distance, a measure of the accuracy of the variational dynamics, below a fixed threshold.
We also show a method based on eigenvalue truncation to solve the linear equations of motion for the variational parameters with enhanced noise resilience.
arXiv Detail & Related papers (2024-08-13T02:56:43Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT)
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z) - Multi-mode Tensor Train Factorization with Spatial-spectral
Regularization for Remote Sensing Images Recovery [1.3272510644778104]
We propose a novel low-MTT-rank tensor completion model via multi-mode TT factorization and spatial-spectral smoothness regularization.
We show that the proposed MTTD3R method outperforms compared methods in terms of visual and quantitative measures.
arXiv Detail & Related papers (2022-05-05T07:36:08Z) - Tensor-Train Split Operator KSL (TT-SOKSL) Method for Quantum Dynamics
Simulations [0.0]
We introduce the tensor-train split-operator KSL (TT-SOKSL) method for quantum simulations in tensor-train (TT)/matrix product state (MPS) representations.
We demonstrate the accuracy and efficiency of TT-SOKSL as applied to simulations of the photoisomerization of the retinal chromophore in rhodopsin.
arXiv Detail & Related papers (2022-03-01T15:12:10Z) - Provable Tensor-Train Format Tensor Completion by Riemannian
Optimization [22.166436026482984]
We provide the first theoretical guarantees of the convergence of RGrad algorithm for TT-format tensor completion.
We also propose a novel approach, referred to as the sequential second-order moment method.
arXiv Detail & Related papers (2021-08-27T08:13:58Z) - Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient
Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic
Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z) - A Unified Framework for Coupled Tensor Completion [42.19293115131073]
Coupled tensor decomposition reveals the joint data structure by incorporating prior knowledge that comes from the latent coupled factors.
The tensor ring (TR) has powerful expressive ability and has achieved success in some multi-dimensional data processing applications.
The proposed method is validated on numerical experiments on synthetic data, and experimental results on real-world data demonstrate its superiority over the state-of-the-art methods in terms of recovery accuracy.
arXiv Detail & Related papers (2020-01-09T02:15:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.