Adaptive-weighted tree tensor networks for disordered quantum many-body
systems
- URL: http://arxiv.org/abs/2111.12398v2
- Date: Mon, 6 Jun 2022 14:35:11 GMT
- Title: Adaptive-weighted tree tensor networks for disordered quantum many-body
systems
- Authors: Giovanni Ferrari, Giuseppe Magnifico, Simone Montangero
- Abstract summary: We introduce an adaptive-weighted tree tensor network for the study of disordered and inhomogeneous quantum many-body systems.
We compute the ground state of the two-dimensional quantum Ising model in the presence of quenched random disorder and frustration.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an adaptive-weighted tree tensor network for the study of
disordered and inhomogeneous quantum many-body systems. This ansatz is
assembled on the basis of the random couplings of the physical system with a
procedure that considers a tunable weight parameter to prevent completely
unbalanced trees. Using this approach, we compute the ground state of the
two-dimensional quantum Ising model in the presence of quenched random disorder
and frustration, with lattice size up to $32 \times 32$. We compare the results
with the ones obtained using the standard homogeneous tree tensor networks and
the completely self-assembled tree tensor networks, demonstrating a clear
improvement of numerical precision as a function of the weight parameter,
especially for large system sizes.
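To make the assembly idea concrete, below is a minimal Python sketch of a coupling-driven tree construction with a tunable weight parameter. It is an illustrative guess, not the authors' actual procedure: the greedy pairing, the scoring rule, and the parameter name `alpha` are assumptions introduced here. The sketch builds a binary tree over lattice sites by repeatedly merging the pair of blocks with the strongest mutual coupling, while the weight parameter penalizes size-imbalanced merges, mirroring the abstract's idea of preventing completely unbalanced trees.
```python
# Hypothetical sketch of an adaptive, coupling-driven tree assembly.
# The exact procedure of the paper is not reproduced here; this only
# illustrates merging strongly coupled blocks while a tunable weight
# `alpha` penalizes unbalanced merges.
import itertools
import random


def assemble_tree(couplings, n_sites, alpha=0.5):
    """Greedily pair blocks of sites into a binary tree.

    couplings: dict mapping frozenset({i, j}) -> |J_ij| for lattice bonds.
    alpha:     weight in [0, 1]; 0 follows couplings only (self-assembled
               limit), values near 1 push towards balanced merges.
    Returns a nested-tuple representation of the tree.
    """
    blocks = [(i,) for i in range(n_sites)]       # leaves: single sites
    tree = {b: b[0] for b in blocks}              # block -> subtree

    def block_coupling(a, b):
        # Total coupling strength between two blocks of sites.
        return sum(couplings.get(frozenset({i, j}), 0.0) for i in a for j in b)

    while len(blocks) > 1:
        best, best_score = None, -float("inf")
        for a, b in itertools.combinations(blocks, 2):
            strength = block_coupling(a, b)
            imbalance = abs(len(a) - len(b)) / (len(a) + len(b))
            # Toy scoring rule: trade coupling strength against imbalance.
            score = (1 - alpha) * strength - alpha * imbalance
            if score > best_score:
                best, best_score = (a, b), score
        a, b = best
        merged = tuple(sorted(a + b))
        tree[merged] = (tree.pop(a), tree.pop(b))
        blocks = [blk for blk in blocks if blk not in (a, b)] + [merged]

    return tree[blocks[0]]


if __name__ == "__main__":
    # Toy example: 2x2 lattice with random nearest-neighbour couplings.
    rng = random.Random(0)
    bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]
    J = {frozenset(b): abs(rng.gauss(0, 1)) for b in bonds}
    print(assemble_tree(J, n_sites=4, alpha=0.5))
```
In this toy picture, `alpha = 0` recovers a purely coupling-driven (self-assembled) tree, while `alpha` close to 1 pushes the construction towards a balanced, homogeneous-like tree, matching the two limits the abstract compares against.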
Related papers
- The Augmented Tree Tensor Network Cookbook [0.0]
An augmented tree tensor network (aTTN) is a tensor network ansatz constructed by applying a layer of unitary disentanglers to a tree tensor network. These lecture notes serve as a detailed guide for implementing the aTTN algorithms.
arXiv Detail & Related papers (2025-07-28T18:00:39Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Network reconstruction via the minimum description length principle [0.0]
We propose an alternative nonparametric regularization scheme based on hierarchical Bayesian inference and weight quantization.
Our approach follows the minimum description length (MDL) principle, and uncovers the weight distribution that allows for the most compression of the data.
We demonstrate that our scheme yields systematically increased accuracy in the reconstruction of both artificial and empirical networks.
arXiv Detail & Related papers (2024-05-02T05:35:09Z) - Deep Neural Networks as Variational Solutions for Correlated Open
Quantum Systems [0.0]
We show that parametrizing the density matrix directly with more powerful models can yield better variational ansatz functions.
We present results for the dissipative one-dimensional transverse-field Ising model and a two-dimensional dissipative Heisenberg model.
arXiv Detail & Related papers (2024-01-25T13:41:34Z) - Improving equilibrium propagation without weight symmetry through Jacobian homeostasis [7.573586022424398]
Equilibrium propagation (EP) is a compelling alternative to the backpropagation of error algorithm (BP).
EP requires weight symmetry and infinitesimal equilibrium perturbations, i.e., nudges, to estimate unbiased gradients efficiently.
We show that the finite nudge does not pose a problem, as exact derivatives can still be estimated via a Cauchy integral.
We present a new homeostatic objective that directly mitigates functional asymmetries of the Jacobian at the network's fixed point.
arXiv Detail & Related papers (2023-09-05T13:20:43Z) - Vertical Layering of Quantized Neural Networks for Heterogeneous
Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z) - Understanding Weight Similarity of Neural Networks via Chain
Normalization Rule and Hypothesis-Training-Testing [58.401504709365284]
We present a weight similarity measure that can quantify the weight similarity of non-convolutional neural networks.
We first normalize the weights of neural networks by a chain normalization rule, which is used for weight representation learning.
We extend the traditional hypothesis-testing method to validate the hypothesis on the weight similarity of neural networks.
arXiv Detail & Related papers (2022-08-08T19:11:03Z) - A Multisite Decomposition of the Tensor Network Path Integrals [0.0]
We extend the tensor network path integral (TNPI) framework to efficiently simulate quantum systems with local dissipative environments.
The MS-TNPI method is useful for studying a variety of extended quantum systems coupled with solvents.
arXiv Detail & Related papers (2021-09-20T17:55:53Z) - Sparse Uncertainty Representation in Deep Learning with Inducing Weights [22.912675044223302]
We extend Matheron's conditional Gaussian sampling rule to enable fast weight sampling, allowing our inference method to maintain a reasonable run-time compared with ensembles.
Our approach achieves competitive performance to the state-of-the-art in prediction and uncertainty estimation tasks with fully connected neural networks and ResNets.
arXiv Detail & Related papers (2021-05-30T18:17:47Z) - Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks to asymmetric open quantum systems.
arXiv Detail & Related papers (2020-12-20T18:25:29Z) - Improve Generalization and Robustness of Neural Networks via Weight
Scale Shifting Invariant Regularizations [52.493315075385325]
We show that a family of regularizers, including weight decay, is ineffective at penalizing the intrinsic norms of weights for networks with homogeneous activation functions.
We propose an improved regularizer that is invariant to weight scale shifting and thus effectively constrains the intrinsic norm of a neural network.
arXiv Detail & Related papers (2020-08-07T02:55:28Z) - T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.