Determination of the Semion Code Threshold using Neural Decoders
- URL: http://arxiv.org/abs/2002.08666v2
- Date: Sat, 12 Sep 2020 12:22:40 GMT
- Title: Determination of the Semion Code Threshold using Neural Decoders
- Authors: Santiago Varona and Miguel Angel Martin-Delgado
- Abstract summary: We compute the error threshold for the semion code, the companion of the Kitaev toric code with the same gauge symmetry group $\mathbb{Z}_2$.
We take advantage of the near-optimal performance of some neural network decoders: multilayer perceptrons and convolutional neural networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We compute the error threshold for the semion code, the companion of the
Kitaev toric code with the same gauge symmetry group $\mathbb{Z}_2$. The
application of statistical mechanical mapping methods is highly discouraged for
the semion code, since the code is non-Pauli and non-CSS. Thus, we use machine
learning methods, taking advantage of the near-optimal performance of some
neural network decoders: multilayer perceptrons and convolutional neural
networks (CNNs). We find the values $p_{\text {eff}}=9.5\%$ for uncorrelated
bit-flip and phase-flip noise, and $p_{\text {eff}}=10.5\%$ for depolarizing
noise. We contrast these values with a similar analysis of the Kitaev toric
code on a hexagonal lattice with the same methods. For convolutional neural
networks, we use the ResNet architecture, which allows us to implement very
deep networks and results in better performance and scalability than the
multilayer perceptron approach. We analyze and compare in detail both
approaches and provide a clear argument favoring the CNN as the best suited
numerical method for the semion code.
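As a rough illustration of the decoding setup described in the abstract, the sketch below runs a toy ResNet-style forward pass that maps a grid of syndrome measurements to a probability distribution over the four logical error classes. All names, shapes, and weights here are hypothetical (randomly initialized and untrained); this is a minimal sketch of the CNN-decoder idea under assumed conventions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # 'same' 3x3 convolution with zero padding; x: (H, W, Cin), w: (3, 3, Cin, Cout)
    H, W, _ = x.shape
    Cout = w.shape[-1]
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, Cout))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # ResNet-style block: two convolutions with a skip connection
    return relu(x + conv2d(relu(conv2d(x, w1)), w2))

# Toy syndrome for a distance-5 code: one channel of +/-1 stabilizer outcomes
L = 5
syndrome = rng.choice([-1.0, 1.0], size=(L, L, 1))

C = 8  # hidden channel width (hypothetical)
w_in = rng.normal(0, 0.1, (3, 3, 1, C))
w1 = rng.normal(0, 0.1, (3, 3, C, C))
w2 = rng.normal(0, 0.1, (3, 3, C, C))
w_out = rng.normal(0, 0.1, (L * L * C, 4))  # 4 logical classes: I, X, Y, Z

h = residual_block(relu(conv2d(syndrome, w_in)), w1, w2)
logits = h.reshape(-1) @ w_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax over the four logical classes
print(probs.shape)  # prints (4,)
```

In practice the decoder would be trained on sampled (syndrome, logical class) pairs; the skip connection is what lets such networks grow deep enough to match the scalability advantage the abstract attributes to ResNet over the multilayer perceptron.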
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Far from Perfect: Quantum Error Correction with (Hyperinvariant) Evenbly Codes [38.729065908701585]
We introduce a new class of qubit codes that we call Evenbly codes.
Our work indicates that Evenbly codes may show promise for practical quantum computing applications.
arXiv Detail & Related papers (2024-07-16T17:18:13Z)
- Low PAPR MIMO-OFDM Design Based on Convolutional Autoencoder [20.544993155126967]
A new framework for peak-to-average power ratio ($\mathsf{PAPR}$) reduction and waveform design is presented.
A convolutional-autoencoder ($\mathsf{CAE}$) architecture is presented.
We show that a single trained model covers the tasks of $\mathsf{PAPR}$ reduction, spectrum design, and $\mathsf{MIMO}$ detection together over a wide range of SNR levels.
arXiv Detail & Related papers (2023-01-11T11:35:10Z)
- Training Overparametrized Neural Networks in Sublinear Time [14.918404733024332]
Deep learning comes at a tremendous computational and energy cost.
We present a new view of binary neural networks as a small subset of search trees, where each network corresponds to a subset of search trees.
We believe this view will have further applications in the analysis of deep networks.
arXiv Detail & Related papers (2022-08-09T02:29:42Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Deep ensembles in bioimage segmentation [74.01883650587321]
In this work, we propose an ensemble of convolutional neural networks (CNNs).
In ensemble methods, many different models are trained and then used for classification; the ensemble aggregates the outputs of the individual classifiers.
The proposed ensemble is implemented by combining different backbone networks within the DeepLabV3+ and HarDNet frameworks.
arXiv Detail & Related papers (2021-12-24T05:54:21Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Probabilistic Robustness Analysis for DNNs based on PAC Learning [14.558877524991752]
We view a DNN as a function $\boldsymbol{f}$ from inputs to outputs, and consider the local robustness property for a given input.
We learn the score difference function $f_i - f_\ell$ with respect to the target label $\ell$ and attacking label $i$.
Our framework can handle very large neural networks like ResNet152 with $6.5$M neurons, and often generates adversarial examples.
arXiv Detail & Related papers (2021-01-25T14:10:52Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.