Union-find quantum decoding without union-find
- URL: http://arxiv.org/abs/2306.09767v3
- Date: Mon, 8 Jan 2024 14:42:01 GMT
- Title: Union-find quantum decoding without union-find
- Authors: Sam J. Griffiths and Dan E. Browne
- Abstract summary: We show that the behaviour of the decoder at scale underutilises the data structure.
Improvements and simplifications can be made to architectural designs to reduce resource overhead in practice.
This yields a linear-time worst-case complexity for the decoder at scale, even with a naive implementation omitting popular optimisations.
- Score: 4.24243593213882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The union-find decoder is a leading algorithmic approach to the correction of
quantum errors on the surface code, achieving code thresholds comparable to
minimum-weight perfect matching (MWPM) with amortised computational time
scaling near-linearly in the number of physical qubits. This complexity is
achieved via optimisations provided by the disjoint-set data structure. We
demonstrate, however, that the behaviour of the decoder at scale underutilises
this data structure for twofold analytic and algorithmic reasons, and that
improvements and simplifications can be made to architectural designs to reduce
resource overhead in practice. To reinforce this, we model the behaviour of
erasure clusters formed by the decoder and show that there does not exist a
percolation threshold within the data structure for any mode of operation. This
yields a linear-time worst-case complexity for the decoder at scale, even with
a naive implementation omitting popular optimisations.
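For context, the disjoint-set (union-find) data structure named in the abstract is conventionally implemented with path compression and union by rank — the "popular optimisations" that a naive implementation would omit. A minimal, generic sketch of that structure (this is not the decoder's implementation; all names are illustrative):

```python
class DisjointSet:
    """Generic disjoint-set forest with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find(self, x):
        # Path compression (halving): point x at its grandparent as we walk up,
        # flattening the tree so later finds are near-constant time.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same cluster
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True


ds = DisjointSet(6)
ds.union(0, 1)
ds.union(1, 2)
assert ds.find(0) == ds.find(2)   # 0, 1, 2 merged into one cluster
assert ds.find(0) != ds.find(3)   # 3 remains separate
```

In the surface-code setting, such clusters correspond to growing erasure regions; the paper's argument is that, at scale, the decoder's cluster behaviour makes these optimisations unnecessary for linear-time operation.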
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Robust Clustering on High-Dimensional Data with Stochastic Quantization [0.0]
This paper addresses the limitations of conventional vector quantization algorithms.
It investigates Stochastic Quantization (SQ) as an alternative for high-dimensional computation.
arXiv Detail & Related papers (2024-09-03T17:13:55Z)
- Efficient near-optimal decoding of the surface code through ensembling [1.7272067657096395]
Harmonized ensembles of MWPM-based decoders achieve lower logical error rates than their individual counterparts.
We conclude that harmonization provides a viable path towards highly accurate real-time decoding.
arXiv Detail & Related papers (2024-01-23T02:01:33Z)
- CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose Common Random Reconstruction (CORE), which can be used to compress the information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z)
- The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state of the art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z)
- Supervised Dimensionality Reduction and Classification with Convolutional Autoencoders [1.1164202369517053]
A Convolutional Autoencoder is trained to perform supervised dimensionality reduction and prediction simultaneously.
The resulting latent space can be utilized to improve traditional, interpretable classification algorithms.
The proposed methodology introduces advanced explainability regarding not only the data structure, through the produced latent space, but also the classification behaviour.
arXiv Detail & Related papers (2022-08-25T15:18:33Z) - Optimization-based Block Coordinate Gradient Coding for Mitigating
Partial Stragglers in Distributed Learning [58.91954425047425]
This paper aims to design a new gradient coding scheme for mitigating partial stragglers in distributed learning.
We propose a gradient coordinate coding scheme with L coding parameters representing L possibly different diversities for the L coordinates, which generalizes most existing gradient coding schemes.
arXiv Detail & Related papers (2022-06-06T09:25:40Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods rarely deliver real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - A Learning-Based Approach to Address Complexity-Reliability Tradeoff in
OS Decoders [32.35297363281744]
We show that using artificial neural networks to predict the required order of an ordered-statistics-based decoder reduces the average complexity, and hence the latency, of the decoder.
arXiv Detail & Related papers (2021-03-05T18:22:20Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
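The $l_{2,p}$-norm regulariser in the last entry promotes row sparsity of the projection matrix, which is what makes it suitable for feature selection. As an illustrative sketch only (the paper's full objective is not reproduced here), assuming the standard definition $\|W\|_{2,p} = (\sum_i \|w_i\|_2^p)^{1/p}$ over the rows $w_i$ of $W$:

```python
def l2p_norm(W, p):
    """Matrix l_{2,p} norm: p-norm of the vector of row 2-norms.

    Small p (0 < p <= 1) drives entire rows toward zero, so features
    whose rows vanish can be discarded.
    """
    row_norms = [sum(x * x for x in row) ** 0.5 for row in W]
    return sum(r ** p for r in row_norms) ** (1.0 / p)


# Row 2-norms of this matrix are 5.0, 0.0, and 1.0.
W = [[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]]
print(l2p_norm(W, 1.0))  # p = 1 gives the sum of row 2-norms: 6.0
```

With p = 2 the same quantity reduces to the Frobenius norm of W.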
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.