Lifting a CSS code via its handlebody realization
- URL: http://arxiv.org/abs/2505.14327v1
- Date: Tue, 20 May 2025 13:11:53 GMT
- Title: Lifting a CSS code via its handlebody realization
- Authors: Virgile Guemard
- Abstract summary: We present a topological approach to lifting a quantum CSS code. We show how the handlebody realization of a code can also be used to perform code lifting.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a topological approach to lifting a quantum CSS code. In previous work, we proposed lifting a CSS code by constructing covering spaces over its 2D simplicial complex representation, known as the Tanner cone-complex. This idea was inspired by the work of Freedman and Hastings, which associates CSS codes with handlebodies. In this paper, we show how the handlebody realization of a code can also be used to perform code lifting, and we provide a more detailed discussion of why this is essentially equivalent to the Tanner cone-complex approach. As an application, we classify lifts of hypergraph-product codes via their handlebody realization.
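Background note (not taken from the paper): a CSS code is specified by two binary parity-check matrices H_X and H_Z satisfying H_X · H_Z^T = 0 (mod 2), which is exactly the condition for H_Z^T and H_X to form a three-term chain complex; this chain-complex view is the bridge to the topological realizations (Tanner cone-complex, handlebody) that the abstract refers to. The snippet below is a minimal sketch of the standard hypergraph-product construction together with a check of the CSS condition, written with numpy from general knowledge; the function name and the block ordering of the Kronecker products are illustrative conventions, not the paper's notation.

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph-product CSS code built from two classical parity-check
    matrices H1 (r1 x n1) and H2 (r2 x n2); all arithmetic is over GF(2).
    The block ordering follows one common convention and may differ elsewhere."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    # X-type checks: [ H1 (x) I_{n2} | I_{r1} (x) H2^T ]
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    # Z-type checks: [ I_{n1} (x) H2 | H1^T (x) I_{r2} ]
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    # CSS condition: X and Z checks commute, i.e. HX @ HZ^T = 0 (mod 2)
    assert not ((HX @ HZ.T) % 2).any()
    return HX, HZ

# Example: the product of the [3,1,3] repetition code with itself
# yields a small [[13,1,3]] surface-code-like CSS code.
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)
print(HX.shape, HZ.shape)  # (6, 13) (6, 13)
```

Lifting such a code, in the sense of the paper, proceeds not by matrix manipulations like these but by constructing covering spaces of the associated complex; the abstract's classification of lifts of hypergraph-product codes is carried out via their handlebody realization.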
Related papers
- Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning [57.09163579304332]
We introduce PaperCoder, a framework that transforms machine learning papers into functional code repositories. PaperCoder operates in three stages, beginning with a planning stage in which it designs the system architecture with diagrams, identifies file dependencies, and generates configuration files. We then evaluate PaperCoder on generating code implementations from machine learning papers, using both model-based and human evaluations.
arXiv Detail & Related papers (2025-04-24T01:57:01Z) - Asymptotically good CSS-T codes exist [0.0]
We give a new construction of binary quantum codes that enables the generation of a CSS-T code from any given CSS code. We show that the same result holds for binary quantum low-density parity check CSS-T codes.
arXiv Detail & Related papers (2024-12-11T18:03:58Z) - Lifts of quantum CSS codes [0.0]
We propose a notion of lift for quantum CSS codes, inspired by the geometrical construction of Freedman and Hastings.
It is based on the existence of a canonical complex associated with any CSS code, which we introduce under the name of Tanner cone-complex.
arXiv Detail & Related papers (2024-04-25T16:44:45Z) - Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate over strong baselines.
The logical comment decoding strategy is also notably more robust than Chain-of-Thought prompting.
arXiv Detail & Related papers (2024-04-11T08:30:46Z) - Subsystem CSS codes, a tighter stabilizer-to-CSS mapping, and Goursat's Lemma [0.5461938536945721]
We develop a Steane-type decoder using only data from the two underlying classical codes.
We show that any subsystem stabilizer code can be "doubled" to yield a subsystem CSS code with twice the number of physical, logical, and gauge qudits and up to twice the code distance.
arXiv Detail & Related papers (2023-11-29T19:00:04Z) - Graphical CSS Code Transformation Using ZX Calculus [0.6734802552703861]
We present a generic approach to transform CSS codes by building upon their equivalence to phase-free ZX diagrams.
We show how ZX and graphical encoder maps relate several equivalent perspectives on these code-transforming operations.
arXiv Detail & Related papers (2023-07-05T17:04:49Z) - CSS code surgery as a universal construction [51.63482609748332]
We define code maps between Calderbank-Shor-Steane (CSS) codes using maps between chain complexes.
We describe code surgery between such codes using a specific colimit in the category of chain complexes.
arXiv Detail & Related papers (2023-01-31T16:17:25Z) - Soft-Labeled Contrastive Pre-training for Function-level Code Representation [127.71430696347174]
We present SCodeR, a soft-labeled contrastive pre-training framework with two positive sample construction methods (a generic sketch of such a soft-labeled contrastive loss appears after this list).
By considering the relevance between code snippets in a large-scale code corpus, the soft-labeled contrastive pre-training obtains fine-grained soft labels.
SCodeR achieves new state-of-the-art performance on four code-related tasks over seven datasets.
arXiv Detail & Related papers (2022-10-18T05:17:37Z) - CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
arXiv Detail & Related papers (2022-01-26T10:54:30Z) - COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
arXiv Detail & Related papers (2020-10-19T13:53:38Z)
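Several of the machine-learning entries above (SCodeR, CodeRetriever) rely on contrastive pre-training of code representations. The snippet below is a generic sketch of a soft-labeled contrastive loss of the kind referenced in the SCodeR entry, written from general knowledge rather than from that paper; the cosine similarity, the temperature value, and the name `soft_contrastive_loss` are illustrative assumptions.

```python
import numpy as np

def soft_contrastive_loss(anchors, candidates, soft_labels, temperature=0.05):
    """Generic soft-labeled contrastive loss (illustrative, not SCodeR's exact
    objective): cross-entropy between a softmax over similarity scores and a
    given soft-label distribution over the candidates.

    anchors:     (B, D) embeddings of anchor code snippets
    candidates:  (B, D) embeddings of candidate snippets
    soft_labels: (B, B) rows are probability distributions; a hard-labeled
                 contrastive loss would use the identity matrix here
    """
    # Cosine similarities scaled by a temperature
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    logits = (a @ c.T) / temperature                      # (B, B)
    # Row-wise log-softmax, numerically stabilized
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the soft labels, averaged over the batch
    return -(soft_labels * log_probs).sum(axis=1).mean()

# Toy usage with random embeddings and slightly smoothed labels
rng = np.random.default_rng(0)
B, D = 4, 8
emb_a, emb_c = rng.normal(size=(B, D)), rng.normal(size=(B, D))
labels = 0.9 * np.eye(B) + 0.1 / B                        # rows sum to 1
print(soft_contrastive_loss(emb_a, emb_c, labels))
```

With hard one-hot labels this reduces to the standard InfoNCE objective; the soft labels simply spread probability mass over candidates that are semantically related to the anchor.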
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.