Tensor Network Decoding Beyond 2D
- URL: http://arxiv.org/abs/2310.10722v1
- Date: Mon, 16 Oct 2023 18:00:02 GMT
- Title: Tensor Network Decoding Beyond 2D
- Authors: Christophe Piveteau, Christopher T. Chubb, and Joseph M. Renes
- Abstract summary: We introduce several techniques to generalize tensor network decoding to higher dimensions.
We numerically demonstrate that the decoding accuracy of our approach outperforms state-of-the-art decoders on the 3D surface code.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decoding algorithms based on approximate tensor network contraction have
proven tremendously successful in decoding 2D local quantum codes such as
surface/toric codes and color codes, effectively achieving optimal decoding
accuracy. In this work, we introduce several techniques to generalize tensor
network decoding to higher dimensions so that it can be applied to 3D codes as
well as 2D codes with noisy syndrome measurements (phenomenological noise or
circuit-level noise). The three-dimensional case is significantly more
challenging than 2D, as the involved approximate tensor contraction is
dramatically less well-behaved than its 2D counterpart. Nonetheless, we
numerically demonstrate that the decoding accuracy of our approach outperforms
state-of-the-art decoders on the 3D surface code, both in the point and loop
sectors, as well as for depolarizing noise. Our techniques could prove useful
in near-term experimental demonstrations of quantum error correction, when
decoding is to be performed offline and accuracy is of utmost importance. To
this end, we show how tensor network decoding can be applied to circuit-level
noise and demonstrate that it outperforms the matching decoder on the rotated
surface code. Our code is available at https://github.com/ChriPiv/tndecoder3d
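The "approximate tensor network contraction" underlying these decoders rests on one core primitive: compressing a bond of the boundary tensor network with a truncated SVD that keeps only the chi largest singular values. A minimal sketch of that primitive (illustrative only; the function name, dimensions, and rank are assumptions, not code from the paper):

```python
import numpy as np

def truncate_bond(theta, chi):
    """Compress a two-site tensor (flattened to a matrix) to bond dimension chi."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    # Truncation error is the weight of the discarded singular values.
    err = np.sqrt(np.sum(s[chi:] ** 2))
    return u[:, :chi], s[:chi], vh[:chi, :], err

rng = np.random.default_rng(0)
# A 16x16 matrix of exact rank 4: truncating to chi=4 is (numerically) lossless.
theta = rng.normal(size=(16, 4)) @ rng.normal(size=(4, 16))
u, s, vh, err = truncate_bond(theta, chi=4)
approx = u @ np.diag(s) @ vh
print(np.allclose(approx, theta), err < 1e-10)  # True True
```

In 2D this compression is well-controlled, which is why boundary-contraction decoders approach optimal accuracy; the difficulty the abstract points to is that the analogous compression in 3D is dramatically less well-behaved.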
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- A blockBP decoder for the surface code [0.0]
We present a new decoder for the surface code, which combines the accuracy of the tensor-network decoders with the efficiency and parallelism of the belief-propagation algorithm.
Our decoder is therefore a belief-propagation decoder that works in the degenerate maximum-likelihood decoding framework.
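As an illustration of the belief-propagation ingredient such hybrid decoders build on, here is a minimal sum-product decoder for syndrome decoding of a classical repetition code (a sketch under my own choices of parity-check matrix, noise model, and flooding schedule; it is not the blockBP construction itself):

```python
import numpy as np

def bp_decode(H, syndrome, p, n_iter=10):
    """Sum-product belief propagation for syndrome decoding of a binary code."""
    prior = np.log((1 - p) / p)           # LLR > 0 means "probably no error"
    msg_vc = np.where(H == 1, prior, 0.0)
    for _ in range(n_iter):
        # Check-to-variable update, with signs set by the syndrome bits.
        t = np.where(H == 1, np.tanh(msg_vc / 2), 1.0)
        t = np.where(np.abs(t) < 1e-12, 1e-12, t)   # guard the division below
        ext = np.clip(np.prod(t, axis=1, keepdims=True) / t, -0.999999, 0.999999)
        msg_cv = np.where(H == 1,
                          (-1.0) ** syndrome[:, None] * 2 * np.arctanh(ext), 0.0)
        # Variable-to-check update: prior plus all other incoming messages.
        total = prior + msg_cv.sum(axis=0)
        msg_vc = np.where(H == 1, total[None, :] - msg_cv, 0.0)
    return ((prior + msg_cv.sum(axis=0)) < 0).astype(int)

# 5-bit repetition code: each check compares two neighbouring bits.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
error = np.array([1, 0, 0, 0, 0])
syndrome = (H @ error) % 2
print(bp_decode(H, syndrome, p=0.1))  # [1 0 0 0 0]
```

On this Tanner graph, a tree, sum-product converges to the exact marginals; the hard part that tensor-network machinery addresses is the loopy, degenerate quantum setting.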
arXiv Detail & Related papers (2024-02-07T13:32:32Z)
- NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space [77.6067460464962]
Monocular 3D Semantic Scene Completion (SSC) has garnered significant attention in recent years due to its potential to predict complex semantics and geometry shapes from a single image, requiring no 3D inputs.
We identify several critical issues in current state-of-the-art methods, including the Feature Ambiguity of projected 2D features in the ray to the 3D space, the Pose Ambiguity of the 3D convolution, and the Imbalance in the 3D convolution across different depth levels.
We devise a novel Normalized Device Coordinates scene completion network (NDC-Scene) that directly extends the 2
arXiv Detail & Related papers (2023-09-26T02:09:52Z)
- Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders [52.91248611338202]
We propose I2P-MAE, an alternative for obtaining superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders.
Through self-supervised pre-training, we leverage the well-learned 2D knowledge to guide 3D masked autoencoding.
I2P-MAE attains state-of-the-art 90.11% accuracy, +3.68% over the second-best, demonstrating superior transferable capacity.
arXiv Detail & Related papers (2022-12-13T17:59:20Z)
- Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training [56.81809311892475]
Masked Autoencoders (MAE) have shown great potential in self-supervised pre-training for language and 2D image transformers.
We propose Point-M2AE, a strong Multi-scale MAE pre-training framework for hierarchical self-supervised learning of 3D point clouds.
arXiv Detail & Related papers (2022-05-28T11:22:53Z)
- Improved decoding of circuit noise and fragile boundaries of tailored surface codes [61.411482146110984]
We introduce decoders that are both fast and accurate, and can be used with a wide class of quantum error correction codes.
Our decoders, named belief-matching and belief-find, exploit all noise information and thereby unlock higher accuracy demonstrations of QEC.
We find that the decoders led to a much higher threshold and lower qubit overhead in the tailored surface code with respect to the standard, square surface code.
arXiv Detail & Related papers (2022-03-09T18:48:54Z)
- Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? [4.872468969809081]
Most Spiking Neural Network (SNN) works focus on image classification; various coding techniques have therefore been proposed to convert an image into temporal binary spikes.
Among them, rate coding and direct coding are regarded as prospective candidates for building a practical SNN system.
We conduct a comprehensive analysis of the two codings from three perspectives: accuracy, adversarial robustness, and energy-efficiency.
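The two codings can be sketched in a few lines (an illustrative implementation, not the authors' code; the image values and timestep counts are arbitrary):

```python
import numpy as np

def rate_coding(image, timesteps, rng):
    """Bernoulli spike train: each pixel fires with probability equal to its intensity."""
    return (rng.random((timesteps,) + image.shape) < image).astype(np.float32)

def direct_coding(image, timesteps):
    """Feed the analog intensities directly as input current at every timestep."""
    return np.repeat(image[None, ...], timesteps, axis=0)

rng = np.random.default_rng(0)
image = np.array([[0.0, 0.5], [1.0, 0.25]])     # pixel intensities in [0, 1]
spikes = rate_coding(image, timesteps=1000, rng=rng)
print(spikes.mean(axis=0))                      # empirical rates approach the intensities
print(direct_coding(image, timesteps=4).shape)  # (4, 2, 2)
```

Rate coding is binary but stochastic and needs many timesteps to resolve an intensity; direct coding is deterministic and compact but not spike-valued at the input, which is the accuracy/robustness/energy trade-off the paper analyses.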
arXiv Detail & Related papers (2022-01-31T16:18:07Z)
- Local tensor-network codes [0.0]
We show how to write some topological codes, including the surface code and colour code, as simple tensor-network codes.
We prove that this method is efficient in the case of holographic codes.
arXiv Detail & Related papers (2021-09-24T14:38:06Z)
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
- General tensor network decoding of 2D Pauli codes [0.0]
We propose a decoder that approximates maximum-likelihood decoding for 2D stabiliser and subsystem codes subject to Pauli noise.
We numerically demonstrate the power of this decoder by studying four classes of codes under three noise models.
We show that the thresholds yielded by our decoder are state-of-the-art, and numerically consistent with optimal thresholds where available.
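The maximum-likelihood decoding that tensor-network decoders approximate can be made concrete by brute force on a tiny example: sum the probability of every error consistent with the syndrome within each logical class, then pick the heavier class. A hedged sketch for the 3-qubit bit-flip code (exhaustive enumeration stands in for the efficient tensor-network contraction; function names and the i.i.d. noise model are my illustrative choices):

```python
from itertools import product
import numpy as np

def coset_probabilities(H, logical, syndrome, p):
    """Total probability of each logical class of errors consistent with the syndrome."""
    n = H.shape[1]
    probs = [0.0, 0.0]
    for bits in product((0, 1), repeat=n):          # exponential-time enumeration
        e = np.array(bits)
        if np.array_equal((H @ e) % 2, syndrome):
            w = int(e.sum())
            cls = int((e @ logical) % 2)            # does e act as a logical operator?
            probs[cls] += p ** w * (1 - p) ** (n - w)
    return probs

# 3-qubit bit-flip code: Z-type checks Z1Z2 and Z2Z3 detect X errors;
# overlap parity with the logical-Z support (1,1,1) labels the logical class.
H = np.array([[1, 1, 0], [0, 1, 1]])
logical = np.array([1, 1, 1])
syndrome = np.array([1, 0])   # e.g. triggered by an X error on the first qubit
probs = coset_probabilities(H, logical, syndrome, p=0.1)
print([round(q, 4) for q in probs])  # [0.009, 0.081]: the class of the single X error wins
```

A tensor-network decoder computes these same class probabilities by contracting a network rather than enumerating errors, which is what makes near-optimal decoding tractable for large 2D codes.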
arXiv Detail & Related papers (2021-01-11T19:00:03Z)
- Tensor-network codes [0.0]
We introduce tensor-network stabilizer codes which come with a natural tensor-network decoder.
We generalize holographic codes beyond those constructed from perfect or block-perfect isometries.
For holographic codes, the exact tensor-network decoder is efficient, with a complexity that is polynomial in the number of physical qubits.
arXiv Detail & Related papers (2020-09-22T05:44:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.