Universal properties of anyon braiding on one-dimensional wire networks
- URL: http://arxiv.org/abs/2007.01207v2
- Date: Tue, 10 Nov 2020 12:24:14 GMT
- Title: Universal properties of anyon braiding on one-dimensional wire networks
- Authors: Tomasz Maciążek and Byung Hee An
- Abstract summary: We show that anyons on wire networks have fundamentally different braiding properties than anyons in 2D.
The character of braiding depends on the topological invariant called the connectedness of the network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate that anyons on wire networks have fundamentally different
braiding properties than anyons in 2D. Our analysis reveals an unexpectedly
wide variety of possible non-abelian braiding behaviours on networks. The
character of braiding depends on the topological invariant called the
connectedness of the network. As one of our most striking consequences,
particles on modular networks can change their statistical properties when
moving between different modules. However, sufficiently highly connected
networks already reproduce braiding properties of 2D systems. Our analysis is
fully topological and independent of the physical model of anyons.
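The invariant in question is the graph-theoretic connectedness (vertex connectivity) of the network. As a purely illustrative aid, not part of the paper, the sketch below uses networkx to compute the vertex connectivity of a few small example networks and applies the dichotomy suggested by the abstract, with "sufficiently highly connected" taken to mean triconnected as in the related wire-network paper listed below; the example graphs and the classify_braiding helper are assumptions made for the illustration.

```python
# Illustrative sketch (not from the paper): compute the vertex connectivity of
# small wire networks, the invariant the abstract says governs anyon braiding.
import networkx as nx

def classify_braiding(graph: nx.Graph) -> str:
    """Hypothetical helper mapping vertex connectivity to the regime suggested
    by the abstract: triconnected networks behave like planar (2D) systems."""
    k = nx.node_connectivity(graph)
    if k >= 3:
        return f"{k}-connected: expected to reproduce planar (2D) braiding"
    return f"{k}-connected: network-specific, possibly non-planar braiding"

if __name__ == "__main__":
    examples = {
        "Y-junction (star graph)": nx.star_graph(3),   # 1-connected tree
        "4-cycle": nx.cycle_graph(4),                   # 2-connected loop
        "complete graph K5": nx.complete_graph(5),      # 4-connected
    }
    for name, graph in examples.items():
        print(f"{name}: {classify_braiding(graph)}")
```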
Related papers
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z) - SEGNO: Generalizing Equivariant Graph Neural Networks with Physical
Inductive Biases [66.61789780666727]
We show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z) - On Privileged and Convergent Bases in Neural Network Representations [7.888192939262696]
We show that even in wide networks such as WideResNets, neural networks do not converge to a unique basis.
We also analyze Linear Mode Connectivity, which has been studied as a measure of basis correlation.
arXiv Detail & Related papers (2023-07-24T17:11:39Z) - Feature-Learning Networks Are Consistent Across Widths At Realistic
Scales [72.27228085606147]
We study the effect of width on the dynamics of feature-learning neural networks across a variety of architectures and datasets.
Early in training, wide neural networks trained on online data have not only identical loss curves but also agree in their point-wise test predictions throughout training.
We observe, however, that ensembles of narrower networks perform worse than a single wide network.
arXiv Detail & Related papers (2023-05-28T17:09:32Z) - Extending the planar theory of anyons to quantum wire networks [0.0]
We establish graph-braided anyon fusion models for general wire networks.
In particular, we prove that triconnected networks yield the same braiding exchange operators as the planar anyon models.
We conjecture that the graph-braided anyon fusion models will possess the (generalised) coherence property.
arXiv Detail & Related papers (2023-01-16T20:13:27Z) - Curvature-informed multi-task learning for graph networks [56.155331323304]
State-of-the-art graph neural networks attempt to predict multiple properties simultaneously.
We investigate a potential explanation for this phenomenon: the curvature of each property's loss surface significantly varies, leading to inefficient learning.
arXiv Detail & Related papers (2022-08-02T18:18:41Z) - Randomly Initialized One-Layer Neural Networks Make Data Linearly
Separable [1.2277343096128712]
This paper establishes that, given sufficient width, a randomly initialized one-layer neural network can transform two sets into two linearly separable sets without any training.
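The claim is straightforward to probe numerically. The following sketch, not taken from the paper, draws two point clouds that are not linearly separable in input space, passes them through a single randomly initialized ReLU layer of large width, and fits a linear classifier on the resulting features; the particular data distribution, width, and use of scikit-learn are assumptions made for the illustration.

```python
# Illustrative experiment (not from the paper): a wide, untrained random ReLU
# layer tends to make two point clouds linearly separable in feature space.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Two classes of 2D points that are not linearly separable in input space
# (one class inside a disk, the other on a surrounding ring).
n = 200
radii_in = rng.uniform(0.0, 1.0, n)
radii_out = rng.uniform(1.5, 2.5, n)
angles = rng.uniform(0.0, 2 * np.pi, 2 * n)
radii = np.concatenate([radii_in, radii_out])
X = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# One random, untrained layer: features = ReLU(X W + b), with large width.
width = 2000
W = rng.normal(size=(2, width))
b = rng.normal(size=width)
features = np.maximum(X @ W + b, 0.0)

# A linear classifier on the random features; training accuracy near 1
# suggests the two transformed sets are (close to) linearly separable.
clf = LinearSVC(max_iter=10000).fit(features, y)
print("training accuracy on random features:", clf.score(features, y))
```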
arXiv Detail & Related papers (2022-05-24T01:38:43Z) - Learning distinct features helps, provably [98.78384185493624]
We study the diversity of the features learned by a two-layer neural network trained with the least squares loss.
We measure the diversity by the average $L_2$-distance between the hidden-layer features.
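One natural reading of this measure, stated here as an assumption rather than as the paper's exact definition, is the average pairwise $L_2$-distance over the $m$ hidden-unit feature vectors $\phi_1, \dots, \phi_m$:

```latex
% Assumed formalization (not quoted from the paper): diversity of the
% hidden-layer features as their average pairwise L2-distance.
\[
  \mathrm{Div}(\phi_1, \dots, \phi_m)
    = \frac{2}{m(m-1)} \sum_{1 \le i < j \le m} \lVert \phi_i - \phi_j \rVert_2 .
\]
```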
arXiv Detail & Related papers (2021-06-10T19:14:45Z) - Problems of representation of electrocardiograms in convolutional neural
networks [58.720142291102135]
We show that these problems are systemic in nature.
They stem from how convolutional networks handle composite objects whose parts are not rigidly fixed but have significant mobility.
arXiv Detail & Related papers (2020-12-01T14:02:06Z) - Modeling the Evolution of Networks as Shrinking Structural Diversity [0.0]
This article reviews and evaluates models of network evolution based on the notion of structural diversity.
We show that diversity is an underlying theme of three principles of network evolution: the preferential attachment model, connectivity and link prediction.
arXiv Detail & Related papers (2020-09-21T11:30:07Z) - Transmission and navigation on disordered lattice networks, directed
spanning forests and Brownian web [2.0305676256390934]
In this work, we investigate the geometry of networks built on randomly perturbed lattices, with perturbations drawn from spatially dependent point fields.
In the regime of low disorder, we show in 2D and 3D that the directed spanning forest (DSF) almost surely consists of a single tree.
In 2D, we further establish that the DSF, as a collection of paths, converges under diffusive scaling to the Brownian web.
arXiv Detail & Related papers (2020-02-17T11:45:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.