Neuralized Fermionic Tensor Networks for Quantum Many-Body Systems
- URL: http://arxiv.org/abs/2506.08329v2
- Date: Mon, 23 Jun 2025 18:34:38 GMT
- Title: Neuralized Fermionic Tensor Networks for Quantum Many-Body Systems
- Authors: Si-Jing Du, Garnet Kin-Lic Chan
- Abstract summary: We describe a class of neuralized fermionic tensor network states (NN-fTNS). NN-fTNS introduce non-linearity into fermionic tensor networks through configuration-dependent neural network transformations of the local tensors. Compared to existing fermionic neural quantum states (NQS), NN-fTNS offer a physically motivated alternative fermionic structure.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a class of neuralized fermionic tensor network states (NN-fTNS) that introduce non-linearity into fermionic tensor networks through configuration-dependent neural network transformations of the local tensors. The construction uses the fTNS algebra to implement a natural fermionic sign structure and is compatible with standard tensor network algorithms, but gains enhanced expressivity through the neural network parametrization. Using the 1D and 2D Fermi-Hubbard models as benchmarks, we demonstrate that NN-fTNS achieve order of magnitude improvements in the ground-state energy compared to pure fTNS with the same bond dimension, and can be systematically improved through both the tensor network bond dimension and the neural network parametrization. Compared to existing fermionic neural quantum states (NQS) based on Slater determinants and Pfaffians, NN-fTNS offer a physically motivated alternative fermionic structure. Furthermore, compared to such states, NN-fTNS naturally exhibit improved computational scaling and we demonstrate a construction that achieves linear scaling with the lattice size.
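To make the core idea concrete, configuration-dependent neural transformations of local tensors, here is a minimal sketch using a plain 1D matrix product state, where a small MLP rescales each local tensor based on the sampled occupation configuration. All names and shapes are hypothetical, the fermionic sign structure supplied by the fTNS algebra in the paper is omitted, and this illustrates the general construction rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

L, D, d = 6, 4, 2          # sites, bond dimension, local (occupation) dimension
H = 16                     # hidden width of the correction network

# Plain MPS tensors: mps[i] has shape (d, D, D); boundaries handled by vectors.
# NOTE: fermionic signs (the fTNS algebra in the paper) are omitted here.
mps = [rng.normal(scale=0.1, size=(d, D, D)) for _ in range(L)]

# A tiny MLP mapping the full occupation configuration to a per-site,
# per-bond multiplicative correction of the local tensors (hypothetical choice).
W1 = rng.normal(scale=0.1, size=(H, L))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(L * D * D, H))
b2 = np.zeros(L * D * D)

def corrections(config):
    """Configuration-dependent multiplicative corrections for local tensors."""
    h = np.tanh(W1 @ config + b1)
    out = (W2 @ h + b2).reshape(L, D, D)
    return 1.0 + out          # perturbation around the identity correction

def amplitude(config):
    """<config|psi>: contract the neuralized MPS along the chain."""
    corr = corrections(np.asarray(config, dtype=float))
    v = np.ones(D)            # left boundary vector
    for i, n_i in enumerate(config):
        M = mps[i][n_i] * corr[i]   # neural transformation of the local tensor
        v = v @ M
    return v @ np.ones(D)     # right boundary vector

print(amplitude([1, 0, 1, 0, 1, 0]))
```

Because the neural correction depends on the sampled configuration, the ansatz stays compatible with standard tensor network contraction while gaining expressivity beyond the fixed bond dimension.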
Related papers
- Novel Kernel Models and Exact Representor Theory for Neural Networks Beyond the Over-Parameterized Regime [52.00917519626559]
This paper presents two models of neural networks and their training, applicable to networks of arbitrary width, depth, and topology.
We also present a novel, exact representor theory for layer-wise neural network training with unregularized gradient descent, in terms of a local-extrinsic neural kernel (LeNK).
This representor theory gives insight into the role of higher-order statistics in neural network training and the effect of kernel evolution in neural-network kernel models.
arXiv Detail & Related papers (2024-05-24T06:30:36Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach builds on a recently introduced framework for learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations, as sketched below.
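As a rough illustration of lattice equivariance (not the LENN architecture itself, which this summary does not specify), the sketch below symmetrizes a local stencil over the D4 symmetry group of the square lattice and verifies that the resulting layer commutes with lattice rotations; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def d4_symmetrize(k):
    """Average a 3x3 stencil over the 8 elements of D4 (rotations + flips)."""
    ks = [np.rot90(k, r) for r in range(4)]
    ks += [np.fliplr(m) for m in ks]
    return sum(ks) / 8.0

def apply_stencil(field, k):
    """Periodic 3x3 convolution implemented with np.roll (lattice-friendly)."""
    out = np.zeros_like(field)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += k[di + 1, dj + 1] * np.roll(field, (-di, -dj), axis=(0, 1))
    return out

k = d4_symmetrize(rng.normal(size=(3, 3)))
f = rng.normal(size=(8, 8))

# Equivariance check: rotating the field then applying the layer equals
# applying the layer then rotating the result.
lhs = apply_stencil(np.rot90(f), k)
rhs = np.rot90(apply_stencil(f, k))
print(np.allclose(lhs, rhs))   # True
```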
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
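A minimal sketch of the graph-encoding step, one node per neuron and one edge per weight, under assumed conventions (bias as node feature, weight as edge feature); the equivariant GNN that would process such graphs is not shown:

```python
import numpy as np

def mlp_to_graph(weights, biases):
    """Encode an MLP as a graph: one node per neuron (bias as node feature),
    one directed edge per weight (weight value as edge feature)."""
    sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]
    offsets = np.cumsum([0] + sizes)          # first node index of each layer
    node_feat = np.concatenate([np.zeros(sizes[0])] + [b for b in biases])
    edges, edge_feat = [], []
    for l, W in enumerate(weights):           # W: (out_dim, in_dim)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                edges.append((offsets[l] + j, offsets[l + 1] + i))
                edge_feat.append(W[i, j])
    return node_feat, np.array(edges), np.array(edge_feat)

rng = np.random.default_rng(2)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]   # a 3-4-2 MLP
bs = [rng.normal(size=4), rng.normal(size=2)]
nodes, edges, ew = mlp_to_graph(Ws, bs)
print(nodes.shape, edges.shape, ew.shape)     # (9,) (20, 2) (20,)
```

This encoding is architecture-agnostic: any feed-forward network becomes a graph of the same kind, which is what lets a single GNN handle diverse architectures.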
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Equivariant Matrix Function Neural Networks [1.8717045355288808]
We introduce Matrix Function Neural Networks (MFNs), a novel architecture that parameterizes non-local interactions through analytic matrix equivariant functions.
MFNs are able to capture intricate non-local interactions in quantum systems, paving the way to new state-of-the-art force fields.
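The central primitive, an analytic function applied to a symmetric matrix via its eigendecomposition, can be sketched as follows; the matrix A and the chosen function are placeholders, not the paper's learned quantities:

```python
import numpy as np

def matrix_function(A, f):
    """Apply an analytic function to a symmetric matrix via eigendecomposition:
    f(A) = U f(Lambda) U^T.  Entries of f(A) couple arbitrarily distant nodes,
    which is how matrix functions encode non-local interactions."""
    lam, U = np.linalg.eigh(A)
    return (U * f(lam)) @ U.T

rng = np.random.default_rng(3)
n = 6
# A symmetric "interaction" matrix, e.g. built from a graph adjacency.
A = rng.normal(size=(n, n))
A = 0.5 * (A + A.T)

# A resolvent-like function spreads local structure over the whole graph.
F = matrix_function(A, lambda x: 1.0 / (1.0 + x**2))
print(F.shape, np.allclose(F, F.T))   # (6, 6) True
```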
arXiv Detail & Related papers (2023-10-16T14:17:00Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
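For intuition, the first term of such a power-series expansion is the familiar linearization of the network around its initial weights; the sketch below checks that a first-order expansion reproduces a small weight update to second-order accuracy (the paper's exact RKBS construction goes well beyond this first term):

```python
import numpy as np

rng = np.random.default_rng(4)
din, h = 3, 5
W0 = rng.normal(size=(h, din))
a0 = rng.normal(size=h)

def f(x, W, a):
    """A tiny one-hidden-layer network, f(x) = a . tanh(W x)."""
    return a @ np.tanh(W @ x)

def grad(x, W, a):
    """Analytic gradient of f with respect to (W, a), flattened."""
    z = np.tanh(W @ x)
    dW = np.outer(a * (1.0 - z**2), x)
    return np.concatenate([dW.ravel(), z])

x = rng.normal(size=din)
# A small step in weight space, as taken by one step of gradient descent.
delta = 1e-3 * rng.normal(size=h * din + h)
W1 = W0 + delta[: h * din].reshape(h, din)
a1 = a0 + delta[h * din:]

exact = f(x, W1, a1)
linear = f(x, W0, a0) + grad(x, W0, a0) @ delta   # first power-series term
print(exact - linear)   # O(|delta|^2), tiny
```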
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Variational Tensor Neural Networks for Deep Learning [0.0]
We propose an integration of tensor networks (TN) into deep neural networks (NNs).
This, in turn, results in a scalable tensor neural network (TNN) architecture capable of efficient training over a large parameter space.
We validate the accuracy and efficiency of our method by designing TNN models and providing benchmark results for linear and non-linear regressions, data classification and image recognition on MNIST handwritten digits.
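A common way to realize such a TNN layer is to factorize the dense weight matrix as a tensor train (MPO) and contract it core by core; the following sketch, with hypothetical shapes and ranks, verifies the factorized layer against its dense equivalent:

```python
import numpy as np

rng = np.random.default_rng(5)
m1, m2, n1, n2, r = 4, 4, 3, 3, 2   # a 16 -> 9 layer with TT-rank 2

# Two tensor-train (MPO) cores replace a dense (9, 16) weight matrix.
G1 = rng.normal(size=(m1, n1, r))
G2 = rng.normal(size=(r, m2, n2))

def tt_layer(x):
    """Apply the TT-factorized weight: x is reshaped to (m1, m2) and
    contracted core by core, never materializing the dense matrix."""
    X = x.reshape(m1, m2)
    T = np.einsum('ab,acr->bcr', X, G1)       # contract input mode 1
    Y = np.einsum('bcr,rbd->cd', T, G2)       # contract input mode 2
    return Y.ravel()                           # output of size n1 * n2

# Same map as the equivalent dense matrix, for verification.
W = np.einsum('acr,rbd->cdab', G1, G2).reshape(n1 * n2, m1 * m2)
x = rng.normal(size=m1 * m2)
print(np.allclose(tt_layer(x), W @ x))        # True
```

The parameter count drops from m1*m2*n1*n2 to r*(m1*n1 + m2*n2), which is what makes training over a large parameter space scalable.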
arXiv Detail & Related papers (2022-11-26T20:24:36Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Work on the neural tangent kernel (NTK) has focused on typical neural network architectures, but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
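A representative NN-Hp is a polynomial network whose layers combine a linear map of the state with a Hadamard product against a linear map of the input (a Pi-net-style parameterization; whether this matches the paper's exact class is an assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
d, depth = 4, 3

# One (W, V) pair per layer; Hadamard products raise the polynomial degree.
Ws = [rng.normal(scale=0.5, size=(d, d)) for _ in range(depth)]
Vs = [rng.normal(scale=0.5, size=(d, d)) for _ in range(depth)]
c = rng.normal(size=d)

def poly_net(x):
    """Polynomial network with Hadamard products: each layer multiplies the
    running state elementwise by a linear map of the input, so the output
    is a polynomial in x (no activation functions needed)."""
    z = x
    for W, V in zip(Ws, Vs):
        z = (W @ z) * (V @ x) + z   # Hadamard product + skip connection
    return c @ z

x = rng.normal(size=d)
# The response to input scaling is polynomial, not piecewise-linear
# as it would be for a ReLU network.
print(poly_net(x), poly_net(2 * x))
```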
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Analysis of Structured Deep Kernel Networks [0.0]
We show that the use of special types of kernels yields models reminiscent of neural networks, founded in the same theoretical framework as classical kernel methods. In particular, the introduced Structured Deep Kernel Networks (SDKNs) can be viewed as unbounded neural networks (NNs) with optimizable activation functions obeying a representer theorem.
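A loose sketch of the flavor of such a model, assuming the network alternates linear maps with an elementwise, trainable kernel expansion acting as the activation function (the precise SDKN architecture is an assumption here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Kernel centers and trainable coefficients define the activation function.
centers = np.linspace(-2, 2, 9)
alpha = rng.normal(scale=0.3, size=9)

def kernel_activation(t):
    """Optimizable activation: a 1D Gaussian-kernel expansion applied
    elementwise, so the nonlinearity itself is a trainable kernel model."""
    return np.exp(-(t[..., None] - centers) ** 2) @ alpha

W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(1, 5))

def sdkn_like(x):
    return W2 @ kernel_activation(W1 @ x)

print(sdkn_like(rng.normal(size=3)))
```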
arXiv Detail & Related papers (2021-05-15T14:10:35Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
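A linear 2-RNN updates its state bilinearly in (previous state, current input) through a third-order transition tensor, which is what connects it to WFAs and tensor networks; a minimal sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(8)
n, d = 3, 2   # hidden size, input size

# A linear 2-RNN: the next state is bilinear in (state, input).
A = rng.normal(scale=0.5, size=(n, d, n))   # transition tensor
h0 = rng.normal(size=n)                      # initial state
omega = rng.normal(size=n)                   # output map

def run(xs):
    """h_t = A contracted with (h_{t-1}, x_t); output y = <omega, h_T>.
    With no nonlinearity, the model computes a multilinear function of the
    input sequence, which is what links it to WFAs and tensor networks."""
    h = h0
    for x in xs:
        h = np.einsum('i,idj,d->j', h, A, x)
    return omega @ h

xs = rng.normal(size=(4, d))
print(run(xs))
```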
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Automatic Cross-Domain Transfer Learning for Linear Regression [0.0]
This paper extends the capability of transfer learning to linear regression problems. For normal datasets, we assume that some latent domain information is available for transfer learning.
arXiv Detail & Related papers (2020-05-08T15:05:37Z)