Enhanced repetition codes for the cross-platform comparison of progress towards fault-tolerance
- URL: http://arxiv.org/abs/2308.08909v2
- Date: Mon, 27 May 2024 10:31:02 GMT
- Title: Enhanced repetition codes for the cross-platform comparison of progress towards fault-tolerance
- Authors: Milan Liepelt, Tommaso Peduzzi, James R. Wootton
- Abstract summary: Repetition codes have become a commonly used basis of experiments that allow cross-platform comparisons.
Here we propose methods by which repetition code experiments can be expanded and improved.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving fault-tolerance will require a strong relationship between the hardware and the protocols used. Different approaches will therefore naturally have tailored proof-of-principle experiments to benchmark progress. Nevertheless, repetition codes have become a commonly used basis of experiments that allow cross-platform comparisons. Here we propose methods by which repetition code experiments can be expanded and improved, while retaining cross-platform compatibility. We also consider novel methods of analyzing the results, which offer more detailed insights than simple calculation of the logical error rate.
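The benchmark underlying the abstract can be illustrated with a minimal Monte Carlo sketch. This is not the enhanced protocol proposed in the paper, only the textbook baseline it builds on: a distance-d repetition code under independent bit-flip noise, decoded by majority vote, with the logical error rate estimated by sampling. Function name and parameters are illustrative.

```python
import random

def logical_error_rate(d, p, shots=100_000, seed=1234):
    """Estimate the logical error rate of a distance-d repetition code
    under independent bit-flip noise of probability p, using a
    majority-vote decoder (a minimal baseline, not the paper's method)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(shots):
        # Flip each of the d physical bits independently with probability p.
        flips = sum(rng.random() < p for _ in range(d))
        # Majority vote fails when more than half the bits are flipped.
        if flips > d // 2:
            failures += 1
    return failures / shots

# For p below threshold, larger distances suppress the logical error rate.
for d in (3, 5, 7):
    print(d, logical_error_rate(d, p=0.05))
```

Cross-platform comparisons of this kind track how the estimated rate falls with distance; the paper's proposals extend this simple picture while keeping the experiment comparable across hardware.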
Related papers
- Identifying Causal Direction via Variational Bayesian Compression [6.928582707713723]
A key principle utilized in this task is the algorithmic Markov condition, which postulates that the joint distribution, when factorized according to the causal direction, yields a more succinct codelength compared to the anti-causal direction. We propose leveraging the variational Bayesian learning of neural networks as an interpretation of the codelengths.
arXiv Detail & Related papers (2025-05-12T12:40:15Z)
- A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility [29.437125712259046]
Reasoning has emerged as the next major frontier for language models (LMs).
We conduct a comprehensive empirical study and find that current mathematical reasoning benchmarks are highly sensitive to subtle implementation choices.
We propose a standardized evaluation framework with clearly defined best practices and reporting standards.
arXiv Detail & Related papers (2025-04-09T17:58:17Z)
- Binary Code Similarity Detection via Graph Contrastive Learning on Intermediate Representations [52.34030226129628]
Binary Code Similarity Detection (BCSD) plays a crucial role in numerous fields, including vulnerability detection, malware analysis, and code reuse identification.
In this paper, we propose IRBinDiff, which mitigates compilation differences by leveraging LLVM-IR with higher-level semantic abstraction.
Our extensive experiments, conducted under varied compilation settings, demonstrate that IRBinDiff outperforms other leading BCSD methods in both One-to-one comparison and One-to-many search scenarios.
arXiv Detail & Related papers (2024-10-24T09:09:20Z)
- Position: Benchmarking is Limited in Reinforcement Learning Research [33.596940437995904]
This work investigates the sources of increased computation costs in rigorous experiment designs.
We argue for using an additional experimentation paradigm to overcome the limitations of benchmarking.
arXiv Detail & Related papers (2024-06-23T23:36:26Z)
- Low-Depth Flag-Style Syndrome Extraction for Small Quantum Error-Correction Codes [1.2354542488854734]
Flag-style fault-tolerance has become a linchpin in the realization of small fault-tolerant quantum-error correction experiments.
We show that a dynamic choice of stabilizer measurements leads to flag protocols with lower-depth syndrome-extraction circuits.
This work opens the dialogue on exploiting the properties of the full stabilizer group for reducing circuit overhead in fault-tolerant quantum-error correction.
arXiv Detail & Related papers (2023-05-01T12:08:09Z)
- MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
arXiv Detail & Related papers (2022-09-26T12:04:49Z)
- On the Versatile Uses of Partial Distance Correlation in Deep Learning [47.11577420740119]
This paper revisits a (less widely known) concept from statistics, called distance correlation (and its partial variant), designed to evaluate correlation between feature spaces of different dimensions.
We describe the steps necessary to carry out its deployment for large scale models.
This opens the door to a surprising array of applications, ranging from conditioning one deep model on another and learning disentangled representations to optimizing diverse models that are more robust to adversarial attacks.
arXiv Detail & Related papers (2022-07-20T06:36:11Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z)
- Pipelined correlated minimum weight perfect matching of the surface code [56.01788646782563]
We describe a pipeline approach to decoding the surface code using minimum weight perfect matching.
An independent, communication-free, parallelizable processing stage reweights the graph according to likely correlations.
A later general stage finishes the matching.
We validate the new algorithm on the fully fault-tolerant toric, unrotated, and rotated surface codes.
arXiv Detail & Related papers (2022-05-19T19:58:02Z)
- Benchmarking Deep Models for Salient Object Detection [67.07247772280212]
We construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods.
In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on the others.
We propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals.
arXiv Detail & Related papers (2022-02-07T03:43:16Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- STRATA: Simple, Gradient-Free Attacks for Models of Code [7.194523054331424]
We develop a simple and efficient gradient-free method for generating adversarial examples on models of code.
Our method empirically outperforms competing gradient-based methods with less information and less computational effort.
arXiv Detail & Related papers (2020-09-28T18:21:19Z)
- An end-to-end approach for the verification problem: learning the right distance [15.553424028461885]
We augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
We first show it approximates a likelihood ratio which can be used for hypothesis tests.
We observe training is much simplified under the proposed approach compared to metric learning with actual distances.
arXiv Detail & Related papers (2020-02-21T18:46:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.