Erasure Thresholds for Hyperbolic and Semi-Hyperbolic Surface Codes
- URL: http://arxiv.org/abs/2602.10423v1
- Date: Wed, 11 Feb 2026 02:09:35 GMT
- Title: Erasure Thresholds for Hyperbolic and Semi-Hyperbolic Surface Codes
- Authors: Aygul Azatovna Galimova
- Abstract summary: We construct 14 hyperbolic CSS surface codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations and 11 semi-hyperbolic (fine-grained) codes. We simulate all 25 codes under circuit-level erasure and Pauli noise.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We construct 14 hyperbolic CSS surface codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations and 11 semi-hyperbolic (fine-grained) codes. We simulate all 25 codes under circuit-level erasure and Pauli noise. Under circuit-level Pauli noise, pseudothresholds increase with code size within each family ($0.24$--$0.49\%$ for $\{8,3\}$, $0.11$--$0.43\%$ for $\{10,3\}$, $0.07$--$0.13\%$ for $\{12,3\}$). For erasure noise, most codes have $p^*_{\mathrm{E}} > 5\%$. Per-observable family thresholds give erasure-to-Pauli ratios of $2.7$--$3.9\times$ for the base code families. Fine-grained scaling families achieve higher thresholds in both Pauli ($0.67$--$0.68\%$) and erasure ($3.0$--$3.5\%$), with ratios of $4.5$--$5.2\times$. Under phenomenological noise, per-logical $Z$-channel thresholds are ${\sim}2\%$ for $\{8,3\}$ and ${\sim}1\%$ for $\{10,3\}$; the $\{12,3\}$ threshold lies below $0.5\%$.
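The pseudothreshold $p^*$ quoted above is the physical error rate at which encoding stops helping: below it, the logical failure rate falls beneath the physical rate, and larger codes do better. A minimal Monte Carlo sketch of this crossover for a classical repetition code under i.i.d. bit-flip noise — a toy stand-in with an illustrative noise model and decoder, not the paper's circuit-level surface-code simulation:

```python
import random

def logical_error_rate(d, p, trials=20000, seed=7):
    """Monte Carlo estimate of the logical failure rate of a distance-d
    classical repetition code under i.i.d. bit-flip noise of strength p.
    A trial fails when a majority of the d bits are flipped."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > d // 2:
            fails += 1
    return fails / trials

# Below the pseudothreshold, increasing the distance suppresses the
# logical rate; above it, encoding makes things worse.
p = 0.05
rates = {d: logical_error_rate(d, p) for d in (3, 5, 7)}
```

Here each code's pseudothreshold is where `logical_error_rate(d, p)` crosses `p` itself; the paper estimates the analogous crossing for each surface-code family under circuit-level erasure and Pauli noise.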
Related papers
- Hyperbolic and Semi-Hyperbolic Floquet Codes for Photonic Quantum Computing [0.0]
We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes.
arXiv Detail & Related papers (2026-02-26T11:51:08Z)
- Distributed Hyperbolic Floquet Codes under Depolarizing and Erasure Noise [0.0]
We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We simulate these distributed codes under four noise models: depolarizing, SDEM3, correlated EM3, and erasure.
arXiv Detail & Related papers (2026-02-20T03:55:23Z)
- Approximating the operator norm of local Hamiltonians via few quantum states [53.16156504455106]
Consider a Hermitian operator $A$ acting on a complex Hilbert space of dimension $2^n$. We focus on the case when $A$ has small degree in the Pauli expansion; in other words, $A$ is a local $n$-qubit Hamiltonian. We show that whenever $A$ is $d$-local, i.e., $\deg(A) \le d$, we have the following discretization-type inequality.
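As a concrete (and deliberately tiny) illustration of the objects involved, the numpy sketch below builds a 2-local Hamiltonian and lower-bounds its operator norm by Rayleigh quotients over a few random states. The Hamiltonian, state count, and sampling scheme are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Pauli matrices; a d-local Hamiltonian is a sum of terms acting on <= d qubits.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, n):
    """Tensor a single-qubit operator into an n-qubit operator at `site`."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

# Toy 2-local Hamiltonian on n = 3 qubits (transverse-field Ising form).
n = 3
H = sum(embed(Z, i, n) @ embed(Z, i + 1, n) for i in range(n - 1))
H = H + sum(embed(X, i, n) for i in range(n))

# |<psi|H|psi>| <= ||H|| for every state, with equality at an extremal
# eigenvector, so the best Rayleigh quotient over a few random states
# lower-bounds the operator norm.
rng = np.random.default_rng(1)
best = 0.0
for _ in range(200):
    v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    v /= np.linalg.norm(v)
    best = max(best, abs(np.vdot(v, H @ v).real))

exact = float(np.max(np.abs(np.linalg.eigvalsh(H))))
```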
arXiv Detail & Related papers (2025-09-15T14:26:11Z)
- Sharp Gap-Dependent Variance-Aware Regret Bounds for Tabular MDPs [54.28273395444243]
We show that the Monotonic Value Propagation (MVP) algorithm achieves a variance-aware gap-dependent regret bound of $\tilde{O}\!\left(\left(\sum_{\Delta_h(s,a)>0} \frac{H^2 \log K \wedge \mathtt{Var}_{\max}}{\dots}\right)\dots\right)$.
arXiv Detail & Related papers (2025-06-06T20:33:57Z)
- Heavy-Tailed Linear Bandits: Huber Regression with One-Pass Update [62.96781471194877]
Two principled strategies for handling heavy-tailed noise, truncation and median-of-means, have been introduced to heavy-tailed bandits. We propose a one-pass algorithm based on the online mirror descent framework.
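Setting aside the mirror-descent machinery, the one-pass idea can be sketched as a single stochastic-gradient sweep over the Huber loss, whose clipped gradient is what tames heavy-tailed noise. The step size, clipping level, and data model below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def huber_grad(residual, delta):
    """Gradient of the Huber loss wrt the residual: quadratic near zero,
    linear (clipped) in the heavy tails."""
    return residual if abs(residual) <= delta else delta * np.sign(residual)

def one_pass_huber(X, y, delta=1.0, lr=0.05):
    """Single SGD pass on the Huber loss: each sample is touched once,
    so memory stays O(d) and no data needs to be stored."""
    w = np.zeros(X.shape[1])
    for x, target in zip(X, y):
        r = x @ w - target
        w -= lr * huber_grad(r, delta) * x
    return w

# Heavy-tailed demo: linear data with infinite-variance Student-t noise.
rng = np.random.default_rng(0)
n, d = 5000, 3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true + rng.standard_t(df=1.5, size=n)
w_hat = one_pass_huber(X, y)
```

Because the gradient is clipped at `delta`, a single enormous outlier moves `w` by at most `lr * delta * |x|`, which is the robustness property truncation-style methods exploit.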
arXiv Detail & Related papers (2025-03-01T09:41:45Z)
- Coresets for Multiple $\ell_p$ Regression [47.790126028106734]
We construct coresets of size $\tilde{O}(\varepsilon^{-2}d)$ for $p < 2$ and $\tilde{O}(\varepsilon^{-p} d^{p/2})$ for $p > 2$.
For $1 \le p \le 2$, every matrix has a subset of $\tilde{O}(\varepsilon^{-1}k)$ rows which spans a $(\varepsilon^{-1}k)$-approximately optimal $k$-dimensional subspace for $\ell_p$ subspace approximation.
arXiv Detail & Related papers (2024-06-04T15:50:42Z)
- Low Overhead Qutrit Magic State Distillation [0.0]
We show that using qutrits rather than qubits leads to a substantial reduction in the overhead cost associated with fault-tolerant quantum computing. We construct a family of $[[9m-k, k, 2]]_3$ triorthogonal qutrit error-correcting codes for any positive integers $m$ and $k$ with $k \leq 3m-2$.
arXiv Detail & Related papers (2024-03-10T14:56:07Z)
- A spectral least-squares-type method for heavy-tailed corrupted regression with unknown covariance & heterogeneous noise [2.019622939313173]
We revisit heavy-tailed corrupted least-squares linear regression, given a label-feature sample of size $n$ in which at most $\epsilon n$ points are arbitrary outliers.
We propose a near-optimal computationally tractable estimator, based on the power method, assuming no knowledge of $(\Sigma,\Xi)$ nor of the operator norm of $\Xi$.
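The power method the estimator builds on is a standard primitive; here is a self-contained numpy sketch of it on a covariance-like matrix, isolated from the paper's outlier-robust machinery (matrix, iteration count, and data model are illustrative assumptions):

```python
import numpy as np

def power_method(M, iters=300, seed=0):
    """Approximate the top eigenpair of a symmetric PSD matrix by
    repeated multiplication followed by normalization."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    # The Rayleigh quotient of the converged vector approximates the
    # top eigenvalue and never exceeds it.
    return float(v @ M @ v), v

# Sample covariance of heavy-tailed (Student-t) data as a test matrix.
rng = np.random.default_rng(1)
A = rng.standard_t(df=3, size=(500, 4))
Sigma_hat = A.T @ A / len(A)
lam, v = power_method(Sigma_hat)
```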
arXiv Detail & Related papers (2022-09-06T23:37:31Z)
- Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
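A simplified randomized block-Krylov sketch of this family of methods, where matrix-vector products are the only access to $A$ (block size, iteration count, and truncation below are illustrative choices, not the paper's algorithm):

```python
import numpy as np

def krylov_low_rank(A, k, q=5, seed=0):
    """Rank-k approximation from the block Krylov subspace
    span[A S, (A A^T) A S, ...], built entirely from matrix products."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(A.shape[1], k))
    blocks, B = [], A @ S
    for _ in range(q):
        blocks.append(B)
        B = A @ (A.T @ B)
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis of the subspace
    # Project A onto the subspace, then truncate to rank k via a small SVD.
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 60))
A_k = krylov_low_rank(A, k=5)
```

The cost is dominated by the `2 * q * k` products with `A` or `A.T`; the paper's contribution is driving that product count down to $\tilde{O}(k/\sqrt{\epsilon})$ for any Schatten-$p$ norm.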
arXiv Detail & Related papers (2022-02-10T16:10:41Z)
- Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise [9.378684220920562]
We give the tightest statistical query (SQ) lower bounds for learning halfspaces in the presence of Massart noise.
We show that for arbitrary $\eta \in [0,1/2]$, every SQ algorithm achieving misclassification error better than $\eta$ requires queries of superpolynomial accuracy.
arXiv Detail & Related papers (2022-01-24T17:33:19Z)
- Nearly Horizon-Free Offline Reinforcement Learning [97.36751930393245]
We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes with $S$ states, $A$ actions and planning horizon $H$.
We obtain the first set of nearly $H$-free sample complexity bounds for evaluation and planning using the empirical MDPs.
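The "empirical MDP" plug-in idea — estimate transitions by counting observed triples, then evaluate or plan on the estimate — can be sketched in a few tabular lines. Everything here (state/action sizes, data, reward table) is a hypothetical toy, not the paper's analysis:

```python
import numpy as np

def evaluate_on_empirical_mdp(transitions, rewards, policy, S, A, H):
    """Policy evaluation on the empirical MDP: estimate P(s'|s,a) by
    counting (s, a, s') triples, then run H steps of backward induction
    on the time-homogeneous plug-in model."""
    counts = np.zeros((S, A, S))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1
    totals = counts.sum(axis=2, keepdims=True)
    # Unvisited (s, a) pairs fall back to a uniform transition estimate.
    P_hat = np.divide(counts, totals, out=np.full_like(counts, 1.0 / S),
                      where=totals > 0)
    V = np.zeros(S)
    for _ in range(H):
        Q = rewards + P_hat @ V          # Q[s, a] = r(s, a) + E_hat[V(s')]
        V = Q[np.arange(S), policy]      # follow the fixed policy
    return V

# Tiny example: 2 states, 2 actions, each (s, a) observed once in the data.
data = [(0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0)]
r = np.array([[0.0, 1.0], [1.0, 0.0]])
V = evaluate_on_empirical_mdp(data, r, policy=np.array([1, 0]), S=2, A=2, H=3)
```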
arXiv Detail & Related papers (2021-03-25T18:52:17Z)
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality on Hölder Class [6.476766717110237]
We construct neural networks with ReLU, sine and $2^x$ as activation functions. In addition to their super expressive power, functions implemented by ReLU-sine-$2^x$ networks are (generalized) differentiable.
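A minimal forward pass through a layer mixing the three activations; the weights, layer sizes, and unit-to-activation assignment are arbitrary choices that only illustrate the activation set, not the paper's construction:

```python
import numpy as np

def relu_sine_exp_layer(x, W, b, kinds):
    """One dense layer where each unit applies one of the three
    activations (ReLU, sine, or 2^x) to its pre-activation."""
    z = W @ x + b
    acts = {"relu": lambda t: np.maximum(t, 0.0),
            "sin": np.sin,
            "exp2": np.exp2}
    return np.array([acts[k](zi) for k, zi in zip(kinds, z)])

# Tiny network: 1 input -> 3 hidden units (one per activation) -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 1)), rng.normal(size=3)
w2 = rng.normal(size=3)
x = np.array([0.7])
h = relu_sine_exp_layer(x, W1, b1, kinds=["relu", "sin", "exp2"])
y = float(w2 @ h)
```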
arXiv Detail & Related papers (2021-02-28T15:57:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.