Regularized scheme of time evolution tensor network algorithms
- URL: http://arxiv.org/abs/2208.03436v1
- Date: Sat, 6 Aug 2022 03:38:37 GMT
- Title: Regularized scheme of time evolution tensor network algorithms
- Authors: Li-Xiang Cen
- Abstract summary: Regularized factorization is proposed to simulate time evolution for quantum lattice systems.
The resulting compact structure of the propagator indicates a high-order Baker-Campbell-Hausdorff series.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Regularized factorization is proposed to simulate time evolution for quantum
lattice systems. Transcending the Trotter decomposition, the resulting compact
structure of the propagator indicates a high-order Baker-Campbell-Hausdorff
series. A regularized scheme of tensor network algorithms is then developed to
determine the ground-state energy of spin lattice systems with Heisenberg or
Kitaev-type interactions. Benchmark calculations reveal two distinct merits of
the regularized algorithm: it converges stably and is immune to bias even when
the simple update method is applied to the Kitaev spin liquid, and contraction
of the produced tensor network converges rapidly at much lower computational
cost, relaxing the bottleneck in calculating physical expectation values.
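For orientation only: the conventional baseline that such a regularized factorization goes beyond is the symmetric (second-order) Trotter splitting of the propagator, whose error terms are organized by the Baker-Campbell-Hausdorff (BCH) series. The textbook formulas below are background, not the paper's regularized scheme itself.

```latex
% Second-order (symmetric) Trotter splitting of the imaginary-time propagator
% for H = H_A + H_B, and the BCH series that organizes its error terms.
\begin{align}
  e^{-\tau(H_A + H_B)}
    &= e^{-\frac{\tau}{2} H_A}\, e^{-\tau H_B}\, e^{-\frac{\tau}{2} H_A}
       + \mathcal{O}(\tau^{3}), \\
  \log\!\left(e^{X} e^{Y}\right)
    &= X + Y + \tfrac{1}{2}[X,Y]
       + \tfrac{1}{12}\big([X,[X,Y]] + [Y,[Y,X]]\big) + \cdots
\end{align}
```

In a TEBD- or simple-update-style workflow, the two-site gates obtained from such a factorization are applied and truncated repeatedly; the abstract's claim is that the regularized propagator corresponds to a higher-order BCH series, with the precise form given in the paper itself.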
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work is a step towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Tensor Network Representation and Entanglement Spreading in Many-Body
Localized Systems: A Novel Approach [0.0]
A novel method has been devised to compute the Local Integrals of Motion for a one-dimensional many-body localized system.
A class of optimal unitary transformations is deduced in a tensor-network formalism to diagonalize the Hamiltonian of the specified system.
The method was assessed and found to be both fast and accurate to a good approximation.
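For background on the terms used above (the standard many-body-localization picture, not this paper's specific tensor-network construction): the unitary that diagonalizes the Hamiltonian defines quasi-local integrals of motion (l-bits), in terms of which the Hamiltonian takes a classical Ising-like form.

```latex
% Local integrals of motion obtained from the diagonalizing unitary U,
% and the resulting l-bit form of an MBL Hamiltonian (standard picture).
\begin{align}
  \tau_i^{z} &= U\, \sigma_i^{z}\, U^{\dagger}, \qquad [H, \tau_i^{z}] = 0, \\
  H &= \sum_i h_i\, \tau_i^{z}
      + \sum_{i<j} J_{ij}\, \tau_i^{z}\tau_j^{z}
      + \sum_{i<j<k} J_{ijk}\, \tau_i^{z}\tau_j^{z}\tau_k^{z} + \cdots
\end{align}
```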
arXiv Detail & Related papers (2023-12-13T14:28:45Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
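A minimal sketch of what an SGLD epoch with without-replacement minibatching could look like, assuming NumPy arrays and a generic `grad_loss(theta, batch)`; this illustrates the general idea only and is not the authors' exact algorithm or hyperparameters.

```python
import numpy as np

def sgld_epoch_without_replacement(theta, data, grad_loss, lr=1e-3,
                                   temperature=1.0, batch_size=32,
                                   rng=np.random.default_rng()):
    """One epoch of stochastic gradient Langevin dynamics where minibatches
    are drawn without replacement (each sample is visited once per epoch)."""
    n = len(data)
    order = rng.permutation(n)           # without-replacement pass over the data
    for start in range(0, n, batch_size):
        batch = data[order[start:start + batch_size]]
        g = grad_loss(theta, batch)      # minibatch gradient estimate
        noise = rng.normal(size=theta.shape)
        # Langevin update: gradient step plus Gaussian noise at the given temperature
        theta = theta - lr * g + np.sqrt(2.0 * lr * temperature) * noise
    return theta
```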
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram
Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
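A simplified sketch of the Gram-iteration idea for a plain dense matrix (the paper handles convolutional layers via circulant matrix theory; this toy version only shows why iterating the Gram matrix gives a fast, differentiable upper bound on the spectral norm).

```python
import numpy as np

def gram_iteration_bound(W, n_iter=6):
    """Upper bound on the spectral norm (largest singular value) of W.

    Each Gram step squares the singular values, so the Frobenius norm of the
    k-th iterate, re-raised to the power 1/2**k, over-estimates sigma_max by a
    factor that shrinks rapidly with k.
    """
    G = W.astype(np.float64)
    log_scale = 0.0
    for _ in range(n_iter):
        G = G.T @ G                      # Gram step: singular values get squared
        fro = np.linalg.norm(G)          # Frobenius norm, used to rescale
        log_scale = 2.0 * log_scale + np.log(fro)
        G = G / fro                      # keep entries bounded to avoid overflow
    # log_scale now equals log of the Frobenius norm of the unnormalized iterate
    return float(np.exp(log_scale / 2 ** n_iter))

# Quick check against the exact spectral norm
W = np.random.default_rng(0).normal(size=(64, 32))
print(gram_iteration_bound(W), np.linalg.svd(W, compute_uv=False)[0])
```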
arXiv Detail & Related papers (2023-05-25T15:32:21Z) - Binarizing Sparse Convolutional Networks for Efficient Point Cloud
Analysis [93.55896765176414]
We propose binary sparse convolutional networks called BSC-Net for efficient point cloud analysis.
We employ differentiable search strategies to discover the optimal positions for active site matching in the shifted sparse convolution.
Our BSC-Net achieves significant improvements over our strong baseline and outperforms state-of-the-art network binarization methods.
arXiv Detail & Related papers (2023-03-27T13:47:06Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - Automatic structural optimization of tree tensor networks [0.0]
We propose a TTN algorithm that enables us to automatically optimize the network structure by local reconnections of isometries.
We demonstrate that the entanglement structure embedded in the ground-state of the system can be efficiently visualized as a perfect binary tree in the optimized TTN.
arXiv Detail & Related papers (2022-09-07T14:51:39Z) - Coordinate descent on the orthogonal group for recurrent neural network
training [9.886326127330337]
We show that the algorithm rotates two columns of the recurrent matrix, an operation that can be efficiently implemented as a multiplication by a Givens matrix.
Experiments on a benchmark recurrent neural network training problem are presented to demonstrate the effectiveness of the proposed algorithm.
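A minimal sketch of the basic operation mentioned above, rotating two columns of a matrix via right-multiplication by a Givens rotation; the coordinate-descent objective and the choice of rotation angle are omitted, so this is an illustration of the mechanics only.

```python
import numpy as np

def rotate_columns(W, i, j, theta):
    """Return W right-multiplied by a Givens rotation acting on columns i and j.

    Only two columns change, so the update costs O(rows) instead of a full
    matrix-matrix product, and orthogonality of W is preserved exactly.
    """
    W = W.copy()
    c, s = np.cos(theta), np.sin(theta)
    col_i, col_j = W[:, i].copy(), W[:, j].copy()
    W[:, i] = c * col_i - s * col_j
    W[:, j] = s * col_i + c * col_j
    return W

# Orthogonality is preserved: start from an orthogonal matrix and rotate.
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(5, 5)))
Q2 = rotate_columns(Q, 0, 3, 0.7)
print(np.allclose(Q2.T @ Q2, np.eye(5)))  # True
```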
arXiv Detail & Related papers (2021-07-30T19:27:11Z) - A Convergence Theory Towards Practical Over-parameterized Deep Neural
Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks with quadratic widths in the sample size and linear in their depth at a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z) - Optimization schemes for unitary tensor-network circuit [0.0]
We discuss the variational optimization of a unitary tensor-network circuit with different network structures.
The ansatz is built on a generalization of the well-developed multi-scale entanglement renormalization algorithm.
We present the benchmarking calculations for different network structures.
arXiv Detail & Related papers (2020-09-05T21:57:28Z) - Efficient variational contraction of two-dimensional tensor networks
with a non-trivial unit cell [0.0]
Tensor network states provide an efficient class of states that faithfully capture strongly correlated quantum models and systems.
We generalize a recently proposed variational uniform matrix product state algorithm for capturing one-dimensional quantum lattices.
A key property of the algorithm is a computational effort that scales linearly rather than exponentially in the size of the unit cell.
arXiv Detail & Related papers (2020-03-02T19:01:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.