Convolutional neural network based decoders for surface codes
- URL: http://arxiv.org/abs/2312.03508v1
- Date: Wed, 6 Dec 2023 14:07:31 GMT
- Title: Convolutional neural network based decoders for surface codes
- Authors: Simone Bordoni and Stefano Giagu
- Abstract summary: This work reports a study of decoders based on convolutional neural networks, tested on different code distances and noise models.
The results show that decoders based on convolutional neural networks have good performance and can adapt to different noise models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The decoding of error syndromes of surface codes with classical algorithms
may slow down quantum computation. To overcome this problem it is possible to
implement decoding algorithms based on artificial neural networks. This work
reports a study of decoders based on convolutional neural networks, tested on
different code distances and noise models. The results show that decoders based
on convolutional neural networks have good performance and can adapt to
different noise models. Moreover, explainable machine learning techniques have
been applied to the decoder's neural network to better understand the
behaviour and errors of the algorithm, in order to produce a more robust and
better-performing algorithm.
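The abstract describes mapping measured error syndromes to corrections with a convolutional network. As a minimal illustration of the idea (not the paper's actual architecture: the grid size, kernel size, and weights below are placeholder assumptions), a syndrome laid out as a 2D grid of stabilizer outcomes can be fed through a small convolution-plus-dense pipeline that outputs the probability of a logical error:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D cross-correlation of a single-channel input."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyCNNDecoder:
    """One convolutional layer followed by one dense layer.

    The random weights stand in for trained parameters; a real decoder
    would be trained on (syndrome, logical error) pairs.
    """

    def __init__(self, grid=4, k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.kernel = rng.normal(size=(k, k))       # conv filter
        n_feat = (grid - k + 1) ** 2                # flattened feature map size
        self.w = rng.normal(size=n_feat)            # dense layer weights
        self.b = 0.0

    def predict_proba(self, syndrome):
        """Probability that a logical error occurred, given the syndrome grid."""
        feat = relu(conv2d(syndrome, self.kernel)).ravel()
        return sigmoid(feat @ self.w + self.b)

# Usage: a 4x4 syndrome grid with a single violated stabilizer.
syndrome = np.zeros((4, 4))
syndrome[1, 2] = 1.0
decoder = TinyCNNDecoder()
p = decoder.predict_proba(syndrome)
```

The convolutional structure matters here because syndrome patterns are translation-invariant on the lattice, which is what lets such decoders scale across code distances.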
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Learning to Decode the Surface Code with a Recurrent, Transformer-Based Neural Network [11.566578424972406]
We present a recurrent, transformer-based neural network which learns to decode the surface code, the leading quantum error-correction code.
Our decoder outperforms state-of-the-art algorithmic decoders on real-world data from Google's Sycamore quantum processor for distance 3 and 5 surface codes.
arXiv Detail & Related papers (2023-10-09T17:41:37Z) - Neural network decoder for near-term surface-code experiments [0.7100520098029438]
Neural-network decoders can achieve a lower logical error rate compared to conventional decoders.
These decoders require no prior information about the physical error rates, making them highly adaptable.
arXiv Detail & Related papers (2023-07-06T20:31:25Z) - The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Guaranteed Quantization Error Computation for Neural Network Model Compression [2.610470075814367]
Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems.
A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks.
arXiv Detail & Related papers (2023-04-26T20:21:54Z) - The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state of the art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z) - Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation? [41.58529335439799]
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
Recent work has developed the idea into a general-purpose algorithm able to train neural networks using only local computations.
We show that predictive coding networks are substantially more flexible than equivalent deep neural networks.
arXiv Detail & Related papers (2022-02-18T22:57:03Z) - Achieving Low Complexity Neural Decoders via Iterative Pruning [33.774970857450086]
We consider iterative pruning approaches to prune weights in neural decoders.
Decoders with fewer weights can have lower latency and lower complexity.
This will make neural decoders more suitable for mobile and other edge devices with limited computational power.
arXiv Detail & Related papers (2021-12-11T18:33:08Z)
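The iterative pruning approach in the last entry above can be sketched in a few lines. This is an illustrative stand-in for the paper's procedure, not its actual implementation: the core step zeroes out the smallest-magnitude weights of a decoder layer, and in a real pipeline each pruning round would be followed by fine-tuning with the pruned positions held fixed (omitted here).

```python
import numpy as np

def prune_step(weights, fraction):
    """Zero out the `fraction` of currently nonzero weights with the
    smallest magnitude; return the pruned weights and a binary mask."""
    w = weights.copy()
    nz = np.flatnonzero(w)                       # indices of surviving weights
    k = int(len(nz) * fraction)                  # how many to remove this round
    if k > 0:
        order = np.argsort(np.abs(w.flat[nz]))   # smallest magnitudes first
        w.flat[nz[order[:k]]] = 0.0
    mask = (w != 0).astype(w.dtype)
    return w, mask

# Usage: three pruning rounds at 20% each on a hypothetical decoder layer.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
for _ in range(3):
    w, mask = prune_step(w, 0.20)
    # ... fine-tune the decoder with `mask` held fixed (omitted) ...
```

Pruning a fixed fraction per round rather than a fixed count is the usual choice: each round removes proportionally fewer weights, so the sparsity ramps up gradually and the network has a chance to recover accuracy between rounds.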
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.