Hilbert curve vs Hilbert space: exploiting fractal 2D covering to
increase tensor network efficiency
- URL: http://arxiv.org/abs/2105.02239v3
- Date: Tue, 28 Sep 2021 13:24:32 GMT
- Title: Hilbert curve vs Hilbert space: exploiting fractal 2D covering to
increase tensor network efficiency
- Authors: Giovanni Cataldi, Ashkan Abedi, Giuseppe Magnifico, Simone
Notarnicola, Nicola Dalla Pozza, Vittorio Giovannetti, Simone Montangero
- Abstract summary: We present a novel mapping for studying 2D many-body quantum systems.
In particular, we address the problem of choosing an efficient mapping from the 2D lattice to a 1D chain.
We show that the locality-preserving properties of the Hilbert curve lead to a clear improvement in numerical precision.
- Score: 1.2314765641075438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel mapping for studying 2D many-body quantum systems by
solving an effective, one-dimensional long-range model in place of the original
two-dimensional short-range one. In particular, we address the problem of
choosing an efficient mapping from the 2D lattice to a 1D chain that optimally
preserves the locality of interactions within the tensor network (TN) structure. By using Matrix
Product States (MPS) and Tree Tensor Network (TTN) algorithms, we compute the
ground state of the 2D quantum Ising model in transverse field with lattice
size up to $64\times64$, comparing the results obtained from different mappings
based on two space-filling curves, the snake curve and the Hilbert curve. We
show that the locality-preserving properties of the Hilbert curve lead to a
clear improvement in numerical precision, especially for large sizes, and turn
out to provide the best performance for the simulation of 2D lattice systems
via 1D TN structures.
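Since the central ingredient of the paper is the choice of a space-filling curve that flattens the 2D lattice into the 1D chain handled by the TN ansatz, a small self-contained sketch may help (ours, not the authors' code; the helper names `hilbert_index`, `snake_index` and `bond_ranges` are ours). It builds both orderings for an $L\times L$ lattice and reports how long-ranged the nearest-neighbour bonds of the original 2D model become on the chain, the locality property the abstract refers to: under the snake mapping the vertical bonds acquire a range of order $L$ on average, while the Hilbert mapping keeps most bonds much shorter.

```python
# Minimal sketch (ours, not the authors' code) of the two 2D -> 1D mappings
# compared in the paper: the snake curve and the Hilbert curve.  For each
# mapping we measure how far apart 2D nearest-neighbour sites end up on the
# 1D chain, i.e. the interaction ranges of the effective 1D model.

def hilbert_index(L, x, y):
    """Hilbert-curve position of site (x, y) on an L x L lattice (L a power of two)."""
    d, s = 0, L // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def snake_index(L, x, y):
    """Snake (boustrophedon) position: even rows left-to-right, odd rows right-to-left."""
    return y * L + (x if y % 2 == 0 else L - 1 - x)

def bond_ranges(L, index):
    """1D separations of all nearest-neighbour bonds of the L x L square lattice."""
    ranges = []
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):
                if x + dx < L and y + dy < L:
                    ranges.append(abs(index(L, x + dx, y + dy) - index(L, x, y)))
    return ranges

if __name__ == "__main__":
    L = 64                                # largest lattice size used in the paper
    for name, index in (("snake", snake_index), ("hilbert", hilbert_index)):
        r = bond_ranges(L, index)
        frac_long = sum(1 for v in r if v > L) / len(r)
        print(f"{name:8s} mean range {sum(r) / len(r):7.1f}   "
              f"fraction of bonds longer than L: {frac_long:.3f}")
```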
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - NEAT: Distilling 3D Wireframes from Neural Attraction Fields [52.90572335390092]
This paper studies the problem of structured 3D wireframe reconstruction, with line segments and junctions distilled from neural attraction fields.
NEAT jointly optimizes the neural fields and the junctions from scratch, without requiring heuristic cross-view matching.
arXiv Detail & Related papers (2023-07-14T07:25:47Z) - Two Dimensional Isometric Tensor Networks on an Infinite Strip [1.2569180784533303]
We introduce the class of isometric tensor network states (isoTNS) for efficient simulation of 2D systems on finite square lattices.
We iteratively transform an infinite MPS representation of a 2D quantum state into a strip isoTNS and investigate the entanglement properties of the resulting state.
Finally, we introduce an infinite time-evolving block decimation algorithm (iTEBD$^2$) and use it to approximate the ground state of the 2D transverse field Ising model on lattices of infinite strip geometry.
arXiv Detail & Related papers (2022-11-25T19:00:06Z) - Hilbert Distillation for Cross-Dimensionality Networks [23.700464344728424]
3D convolutional neural networks have shown superior performance in processing data such as video and medical imaging.
However, this competitive performance comes at a huge computational cost.
We propose a novel Hilbert curve-based cross-dimensionality distillation approach to improve the performance of 2D networks.
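A rough, simplified sketch of the general idea behind such a distillation scheme (our illustration, not the paper's method): spatial activation maps are flattened along a Hilbert curve so that they become 1D sequences that still preserve 2D locality, and student and teacher sequences are then matched with a simple loss. In the paper the teacher is a 3D network; here both sides are 2D feature maps for brevity, and all function names are ours.

```python
# Simplified illustration (not the paper's method): flatten activation maps
# along a Hilbert curve and compare the resulting locality-preserving 1D
# sequences with an MSE distillation loss.
import numpy as np

def hilbert_sites(L):
    """Sites of an L x L grid (L a power of two) sorted along the Hilbert curve."""
    def index(x, y):
        d, s = 0, L // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            s //= 2
        return d
    return sorted(((x, y) for x in range(L) for y in range(L)), key=lambda p: index(*p))

def hilbert_flatten(feat):
    """Flatten a (C, L, L) activation map into (C, L*L) following the Hilbert curve."""
    _, L, _ = feat.shape
    xs, ys = zip(*hilbert_sites(L))
    return feat[:, list(xs), list(ys)]

def distill_loss(student_feat, teacher_feat):
    """MSE between Hilbert-flattened student and teacher activations (same shape assumed)."""
    return float(np.mean((hilbert_flatten(student_feat) - hilbert_flatten(teacher_feat)) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    student = rng.normal(size=(8, 16, 16))
    teacher = rng.normal(size=(8, 16, 16))
    print(distill_loss(student, teacher))
```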
arXiv Detail & Related papers (2022-11-08T06:25:06Z) - Lattice Convolutional Networks for Learning Ground States of Quantum
Many-Body Systems [33.82764380485598]
We propose lattice convolutions, in which a set of operations is used to convert non-square lattices into grid-like augmented lattices.
Based on the proposed lattice convolutions, we design lattice convolutional networks (LCN) that use self-gating and attention mechanisms.
arXiv Detail & Related papers (2022-06-15T08:24:37Z) - Efficient Simulation of Dynamics in Two-Dimensional Quantum Spin Systems
with Isometric Tensor Networks [0.0]
We investigate the computational power of the recently introduced class of isometric tensor network states (isoTNSs).
We discuss several technical details regarding the implementation of isoTNSs-based algorithms and compare different disentanglers.
We compute the dynamical spin structure factor of 2D quantum spin systems for two paradigmatic models.
arXiv Detail & Related papers (2021-12-15T19:00:05Z) - Pure Exploration in Kernel and Neural Bandits [90.23165420559664]
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
arXiv Detail & Related papers (2021-06-22T19:51:59Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of fully-connected ReLU networks.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions, while achieving comparable error bounds both in theory and in practice.
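As an illustration of what a random feature map for the NTK looks like, here is a generic Monte-Carlo sketch for the NTK of a two-layer ReLU network with both layers trained (a textbook-style construction, not the more compact one proposed in the paper; all names are ours). For unit-norm inputs the infinite-width kernel is $\Theta(x,x')=\kappa_1(u)+u\,\kappa_0(u)$ with $u=x\cdot x'$, $\theta=\arccos u$, $\kappa_0=(\pi-\theta)/\pi$ and $\kappa_1=(\sin\theta+(\pi-\theta)u)/\pi$; the script checks the random-feature estimate against this closed form.

```python
# Generic Monte-Carlo random-feature sketch (ours, not the paper's construction)
# for the NTK of a two-layer ReLU network
#   f(x) = sqrt(2/m) * sum_j a_j * ReLU(w_j . x),   a_j, w_j ~ N(0, 1),
# with both layers trained.
import numpy as np

def ntk_exact(x, y):
    """Closed-form two-layer ReLU NTK for unit-norm inputs x, y."""
    u = np.clip(np.dot(x, y), -1.0, 1.0)
    t = np.arccos(u)
    kappa0 = (np.pi - t) / np.pi
    kappa1 = (np.sin(t) + (np.pi - t) * u) / np.pi
    return kappa1 + u * kappa0

def ntk_features(x, W):
    """Random features whose inner products approximate the NTK; W is an (m, d) Gaussian matrix."""
    m = W.shape[0]
    relu = np.maximum(W @ x, 0.0)                         # second-layer gradient part
    gate = (W @ x > 0.0).astype(float)                    # ReLU derivative (first-layer part)
    phi1 = np.sqrt(2.0 / m) * relu                        # inner product -> kappa1
    phi0 = np.sqrt(2.0 / m) * np.outer(gate, x).ravel()   # inner product -> u * kappa0
    return np.concatenate([phi1, phi0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, m = 8, 20000
    x, y = rng.normal(size=d), rng.normal(size=d)
    x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)
    W = rng.normal(size=(m, d))
    print("exact:", ntk_exact(x, y),
          "random features:", ntk_features(x, W) @ ntk_features(y, W))
```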
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Efficient and Flexible Approach to Simulate Low-Dimensional Quantum
Lattice Models with Large Local Hilbert Spaces [0.08594140167290096]
We introduce a mapping that allows one to construct artificial $U(1)$ symmetries for any type of lattice model.
Exploiting the generated symmetries, the numerical cost related to the local degrees of freedom decreases significantly.
Our findings motivate an intuitive physical picture of the truncations occurring in typical algorithms.
arXiv Detail & Related papers (2020-08-19T14:13:56Z) - SeqXY2SeqZ: Structure Learning for 3D Shapes by Sequentially Predicting
1D Occupancy Segments From 2D Coordinates [61.04823927283092]
We propose to represent 3D shapes using 2D functions, where the output of the function at each 2D location is a sequence of line segments inside the shape.
We implement this approach using a Seq2Seq model with attention, called SeqXY2SeqZ, which learns the mapping from a sequence of 2D coordinates along two arbitrary axes to a sequence of 1D locations along the third axis.
Our experiments show that SeqXY2SeqZ outperforms state-of-the-art methods on widely used benchmarks.
arXiv Detail & Related papers (2020-03-12T00:24:36Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)