From Tensor Network Quantum States to Tensorial Recurrent Neural
Networks
- URL: http://arxiv.org/abs/2206.12363v1
- Date: Fri, 24 Jun 2022 16:25:36 GMT
- Title: From Tensor Network Quantum States to Tensorial Recurrent Neural
Networks
- Authors: Dian Wu, Riccardo Rossi, Filippo Vicentini, Giuseppe Carleo
- Abstract summary: We show that any matrix product state (MPS) can be exactly represented by a recurrent neural network (RNN) with a linear memory update.
We generalize this RNN architecture to 2D lattices using a multilinear memory update.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that any matrix product state (MPS) can be exactly represented by a
recurrent neural network (RNN) with a linear memory update. We generalize this
RNN architecture to 2D lattices using a multilinear memory update. It supports
perfect sampling and wave function evaluation in polynomial time, and can
represent an area law of entanglement entropy. Numerical evidence shows that it
can encode the wave function using a bond dimension lower by orders of
magnitude when compared to MPS, with an accuracy that can be systematically
improved by increasing the bond dimension.
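As a minimal illustration of the first claim (a sketch, not the authors' code; the tensor shapes, names, and boundary vectors below are assumptions), an MPS amplitude can be evaluated as an RNN whose hidden state is updated linearly at each site:

```python
import numpy as np

# Minimal sketch, assuming a translation-invariant MPS with open boundaries:
# the hidden state h plays the role of the MPS bond vector, and the update
# h -> A[s] @ h is linear in h, i.e. a linear memory update.
rng = np.random.default_rng(0)
L, d, chi = 8, 2, 4                                  # sites, local dimension, bond dimension
A = rng.normal(size=(d, chi, chi)) / np.sqrt(chi)    # one matrix per local state s
h_left = np.zeros(chi)
h_left[0] = 1.0                                      # left boundary vector
v_right = np.zeros(chi)
v_right[0] = 1.0                                     # right boundary vector

def amplitude(spins):
    """psi(s_1, ..., s_L) = v_right . A[s_L] ... A[s_1] . h_left."""
    h = h_left
    for s in spins:
        h = A[s] @ h                                 # RNN step: linear in the memory h
    return v_right @ h

print(amplitude([0, 1, 0, 1, 1, 0, 0, 1]))
```

In this picture the hidden state carries the open bond index of the MPS, so the bond dimension plays the role of the RNN memory size.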
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
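For context, the core Fourier layer of an FNO can be sketched as below; this is a simplified, single-channel illustration with assumed parameter names, not code from the paper:

```python
import numpy as np

# One Fourier layer: filter the signal in Fourier space with weights on the
# lowest modes, then add a pointwise linear term on the original signal.
def fourier_layer(u, w_modes, w_local, n_modes):
    u_hat = np.fft.rfft(u)                          # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = w_modes * u_hat[:n_modes]   # (learned) weights on low modes
    return np.fft.irfft(out_hat, n=u.size) + w_local * u

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
print(fourier_layer(u, w_modes=np.ones(8), w_local=0.5, n_modes=8)[:4])
```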
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
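As a rough sketch of that CWT step (the wavelet, scales, and names are assumptions, not the TCCT-Net implementation), a 1D signal can be mapped to a 2D scale-by-time tensor like this:

```python
import numpy as np

# Hand-rolled real Morlet-style CWT: each row of the output is the signal
# convolved with the wavelet at one scale, giving a (scales x time) tensor.
def simple_cwt(x, scales, omega0=5.0):
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.cos(omega0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        rows.append(np.convolve(x, wavelet, mode="same"))
    return np.stack(rows)

signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))
tensor_2d = simple_cwt(signal, scales=np.arange(1, 17))
print(tensor_2d.shape)                              # (16, 256)
```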
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- GaborPINN: Efficient physics informed neural networks using multiplicative filtered networks [0.0]
Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs).
We propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training.
The proposed method achieves up to a two-order-of-magnitude speed-up in convergence compared with conventional PINNs.
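A single multiplicative-filter step with a Gabor filter, the kind of building block such filtered networks rely on, might look like the hedged sketch below (parameter names and shapes are illustrative, not the GaborPINN code):

```python
import numpy as np

# One multiplicative-filter step: a linear map of the hidden features is
# gated by a Gabor filter applied to the raw input coordinate.
def gabor_filter(x, mu, gamma, omega, phi):
    return np.exp(-0.5 * gamma * (x - mu) ** 2) * np.sin(omega * x + phi)

def mfn_step(z, x, W, b, mu, gamma, omega, phi):
    return (W @ z + b) * gabor_filter(x, mu, gamma, omega, phi)

# toy usage with a 4-dimensional hidden state and a scalar coordinate x
rng = np.random.default_rng(1)
z = rng.normal(size=4)
out = mfn_step(z, x=0.3, W=rng.normal(size=(4, 4)), b=np.zeros(4),
               mu=0.0, gamma=2.0, omega=30.0, phi=0.0)
print(out.shape)                                    # (4,)
```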
arXiv Detail & Related papers (2023-08-10T19:51:00Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
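The POD reduction step can be sketched generically with an SVD of the snapshot matrix (toy sizes and variable names assumed; this is not the paper's hybrid ROM):

```python
import numpy as np

# Snapshot matrix X: one column per time instant, one row per spatial dof.
# POD modes are the leading left singular vectors; the temporal coefficients
# (to be predicted by a neural network) are the projections onto those modes.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 200))          # 1000 dofs, 200 snapshots (toy data)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
r = 10                                    # keep a few POD modes
modes = U[:, :r]                          # spatial POD basis
coeffs = modes.T @ X                      # temporal coefficients, shape (r, 200)

X_approx = modes @ coeffs                 # low-rank reconstruction
print(np.linalg.norm(X - X_approx) / np.linalg.norm(X))
```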
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Supplementing Recurrent Neural Network Wave Functions with Symmetry and Annealing to Improve Accuracy [0.7234862895932991]
Recurrent neural networks (RNNs) are a class of neural networks that have emerged from the paradigm of artificial intelligence.
We show that our method is superior to Density Matrix Renormalisation Group (DMRG) for system sizes larger than or equal to $14 \times 14$ on the triangular lattice.
arXiv Detail & Related papers (2022-07-28T18:00:03Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
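The compression idea can be illustrated by storing a small integer codebook index per grid cell instead of a full feature vector; the sizes below are assumptions, not the paper's configuration:

```python
import numpy as np

# A dense feature grid stores a D-dimensional vector per cell; the compressed
# grid stores only a small integer index per cell plus a shared codebook.
rng = np.random.default_rng(3)
G, D, K = 128, 16, 256                     # grid side, feature dim, codebook size

codebook = rng.normal(size=(K, D))         # learned jointly with the decoder
indices = rng.integers(0, K, size=(G, G))  # one byte-sized index per cell

features = codebook[indices]               # decoded grid, shape (G, G, D)
dense_bytes = G * G * D * 4                # float32 grid
vq_bytes = G * G * 1 + K * D * 4           # uint8 indices + float32 codebook
print(features.shape, dense_bytes / vq_bytes)
```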
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Data-Driven Time Propagation of Quantum Systems with Neural Networks [0.0]
We investigate the potential of supervised machine learning to propagate a quantum system in time.
We show that neural networks can work as time propagators to arbitrary future times and that they can be concatenated in time to form an autoregression.
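The autoregressive use of a learned propagator can be sketched by repeatedly feeding a single-step map its own output; the orthogonal matrix below is a stand-in for a trained network, not the paper's model:

```python
import numpy as np

# Toy autoregression: a single-step propagator (here a norm-preserving
# stand-in matrix) is applied to its own output to reach later times.
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

def propagate(state, n_steps):
    for _ in range(n_steps):
        state = Q @ state                  # feed the output back in
    return state

psi0 = np.array([1.0, 0.0, 0.0, 0.0])
print(propagate(psi0, n_steps=10))
```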
arXiv Detail & Related papers (2022-01-27T17:08:30Z)
- Learning Wave Propagation with Attention-Based Convolutional Recurrent Autoencoder Net [0.0]
We present an end-to-end attention-based convolutional recurrent autoencoder (AB-CRAN) network for data-driven modeling of wave propagation phenomena.
We employ a denoising convolutional autoencoder to build a low-dimensional representation from full-order snapshots of time-dependent hyperbolic partial differential equations governing wave propagation.
The attention-based sequence-to-sequence network increases the time-horizon of prediction by five times compared to the plain RNN-LSTM.
arXiv Detail & Related papers (2022-01-17T20:51:59Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can render two classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Memory Capacity of Recurrent Neural Networks with Matrix Representation [1.0978496459260902]
We study a probabilistic notion of memory capacity based on Fisher information for matrix-based neural networks.
We show and analyze the increase in memory capacity that arises when such networks are equipped with an external state memory.
We find an improvement in the performance of Matrix NTMs by the addition of external memory.
arXiv Detail & Related papers (2021-04-11T23:43:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.