Verified Reversible Programming for Verified Lossless Compression
- URL: http://arxiv.org/abs/2211.09676v1
- Date: Wed, 2 Nov 2022 16:39:41 GMT
- Title: Verified Reversible Programming for Verified Lossless Compression
- Authors: James Townsend and Jan-Willem van de Meent
- Abstract summary: Lossless compression implementations typically contain two programs, an encoder and a decoder, which are required to be inverse to one another.
We observe that a significant class of compression methods, based on asymmetric numeral systems (ANS), have shared structure between the encoder and decoder.
We have implemented a small reversible language, embedded in Agda, which we call 'Flipper'.
- Score: 11.020543186794459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lossless compression implementations typically contain two programs, an
encoder and a decoder, which are required to be inverse to one another.
Maintaining consistency between two such programs during development requires
care, and incorrect data decoding can be costly and difficult to debug. We
observe that a significant class of compression methods, based on asymmetric
numeral systems (ANS), have shared structure between the encoder and decoder --
the decoder program is the 'reverse' of the encoder program -- allowing both to
be simultaneously specified by a single, reversible, 'codec' function. To
exploit this, we have implemented a small reversible language, embedded in
Agda, which we call 'Flipper'. Agda supports formal verification of program
properties, and the compiler for our reversible language (which is implemented
as an Agda macro), produces not just an encoder/decoder pair of functions but
also a proof that they are inverse to one another. Thus users of the language
get formal verification 'for free'. We give a small example use-case of Flipper
in this paper, and plan to publish a full compression implementation soon.
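To make the shared structure concrete: a reversible codec is a single definition read in two directions, with the decoder obtained mechanically from the encoder. Below is a minimal Python sketch of that shape (hypothetical names, not Flipper's Agda syntax; a runtime assert stands in for the machine-checked inverse proof that Flipper's compiler macro produces):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Rev:
    """A reversible step: fwd and bwd are assumed to be mutually inverse."""
    fwd: Callable[[Any], Any]
    bwd: Callable[[Any], Any]

    def then(self, other: "Rev") -> "Rev":
        # Composition preserves invertibility: (g . f)^-1 = f^-1 . g^-1.
        return Rev(fwd=lambda x: other.fwd(self.fwd(x)),
                   bwd=lambda y: self.bwd(other.bwd(y)))

def push_digit(base: int) -> Rev:
    # One uniform coding step, the basic shape of an ANS step: encoding
    # absorbs a symbol into the state; decoding recovers both the previous
    # state and the symbol, exactly inverting the encoder.
    return Rev(fwd=lambda sd: sd[0] * base + sd[1],
               bwd=lambda s: (s // base, s % base))

codec = push_digit(10)
encoded = codec.fwd((42, 7))          # encode digit 7 into state 42 -> 427
assert codec.bwd(encoded) == (42, 7)  # decoding runs the same codec backwards
```

In Flipper itself, composition and inversion are handled by an Agda macro, so the inverse property is established at compile time rather than tested at run time.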
Related papers
- Generalizing the matching decoder for the Chamon code [1.8416014644193066]
We implement a matching decoder for a three-dimensional, non-CSS, low-density parity check code known as the Chamon code.
We find that a generalized matching decoder that is augmented by a belief-propagation step prior to matching gives a threshold of 10.5% for depolarising noise.
arXiv Detail & Related papers (2024-11-05T19:00:12Z)
- Collective Bit Flipping-Based Decoding of Quantum LDPC Codes [0.6554326244334866]
We improve both the error correction performance and decoding latency of variable degree-3 (dv-3) QLDPC codes under iterative decoding.
Our decoding scheme is based on applying a modified version of bit flipping (BF) decoding, namely two-bit bit flipping (TBF) decoding.
arXiv Detail & Related papers (2024-06-24T18:51:48Z)
- Progressive-Proximity Bit-Flipping for Decoding Surface Codes [8.971989179518214]
Topological quantum codes, such as toric and surface codes, are excellent candidates for hardware implementation.
Existing decoders often fall short of meeting requirements such as having low computational complexity.
We propose a novel bit-flipping (BF) decoder tailored for toric and surface codes.
arXiv Detail & Related papers (2024-02-24T22:38:05Z)
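Both bit-flipping entries above build on the same classical core: repeatedly flip the bit implicated in the most unsatisfied parity checks. A generic Gallager-style sketch over GF(2) follows (an illustration only, not the TBF or progressive-proximity variants these papers propose):

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iters: int = 50) -> np.ndarray:
    """Generic bit-flipping decoding: H is an (m, n) parity-check matrix,
    y a received hard-decision vector over GF(2)."""
    x = y.copy() % 2
    for _ in range(max_iters):
        syndrome = (H @ x) % 2      # 1 marks an unsatisfied check
        if not syndrome.any():
            break                   # all checks satisfied: done
        unsat = H.T @ syndrome      # per-bit count of unsatisfied checks
        x[np.argmax(unsat)] ^= 1    # flip the worst offender
    return x

# Tiny example: (7,4) Hamming code, one injected error is corrected.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] = 1
assert np.array_equal(bit_flip_decode(H, received), codeword)
```

The quantum decoders above refine the flip rule and its scheduling; this loop is the shared skeleton.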
- Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
arXiv Detail & Related papers (2024-02-19T18:06:02Z)
- SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization [51.67317895094664]
This paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects.
We propose SparseCoder, an identifier-aware sparse transformer for effectively handling long code sequences.
arXiv Detail & Related papers (2024-01-26T09:23:27Z)
- Guess & Sketch: Language Model Guided Transpilation [59.02147255276078]
Learned transpilation offers an alternative to manual re-writing and engineering efforts.
Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness.
Guess & Sketch extracts alignment and confidence information from features of the LM and then passes it to a symbolic solver to resolve semantic equivalence.
arXiv Detail & Related papers (2023-09-25T15:42:18Z)
- Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data [145.95460945321253]
We introduce two pre-training tasks for the encoder-decoder network using acoustic units, i.e., pseudo codes.
The proposed Speech2C reduces the word error rate (WER) by a relative 19.2% compared with the method without decoder pre-training.
arXiv Detail & Related papers (2022-03-31T15:33:56Z)
- Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations [22.40667024030858]
Bi-encoders produce fixed-dimensional sentence representations and are computationally efficient.
Cross-encoders can leverage their attention heads to exploit inter-sentence interactions for better performance.
Trans-Encoder combines the two learning paradigms into an iterative joint framework to simultaneously learn enhanced bi- and cross-encoders.
arXiv Detail & Related papers (2021-09-27T14:06:47Z)
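The alternating scheme in the entry above can be sketched with toy stand-ins (hypothetical linear "encoders", not the paper's transformer models): each round distills one architecture's pair scores into the other on unlabeled pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bi_enc = nn.Linear(16, 8)     # toy bi-encoder: embeds each "sentence" separately
cross_enc = nn.Linear(32, 1)  # toy cross-encoder: scores a concatenated pair
opt_bi = torch.optim.Adam(bi_enc.parameters(), lr=1e-3)
opt_x = torch.optim.Adam(cross_enc.parameters(), lr=1e-3)

a, b = torch.randn(64, 16), torch.randn(64, 16)  # unlabeled sentence pairs

for _ in range(3):  # alternating self-/mutual-distillation rounds
    # 1) Bi-encoder similarities serve as pseudo-labels for the cross-encoder.
    with torch.no_grad():
        target = F.cosine_similarity(bi_enc(a), bi_enc(b), dim=-1)
    pred = cross_enc(torch.cat([a, b], dim=-1)).squeeze(-1)
    loss_x = F.mse_loss(pred, target)
    opt_x.zero_grad(); loss_x.backward(); opt_x.step()

    # 2) The (typically stronger) cross-encoder is distilled back into
    #    the bi-encoder, closing the loop.
    with torch.no_grad():
        target = cross_enc(torch.cat([a, b], dim=-1)).squeeze(-1)
    pred = F.cosine_similarity(bi_enc(a), bi_enc(b), dim=-1)
    loss_bi = F.mse_loss(pred, target)
    opt_bi.zero_grad(); loss_bi.backward(); opt_bi.step()
```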
- Dense Coding with Locality Restriction for Decoder: Quantum Encoders vs. Super-Quantum Encoders [67.12391801199688]
We investigate dense coding by imposing various locality restrictions to our decoder.
In this task, the sender Alice and the receiver Bob share an entangled state.
arXiv Detail & Related papers (2021-09-26T07:29:54Z)
- On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take Transformer as the testbed and introduce a layer of gates in-between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing L0 penalty.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
arXiv Detail & Related papers (2020-04-24T16:57:52Z)
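The expected L0 penalty in the entry above is typically realized with the hard-concrete relaxation of Louizos et al. (2018); a sketch of such a gate layer follows (the paper's exact parameterization may differ):

```python
import math
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Stochastic gate whose expected L0 norm is differentiable."""
    def __init__(self, n: int, beta: float = 0.5,
                 gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:  # reparameterized sample from the hard-concrete
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:              # deterministic gate at evaluation time
            s = torch.sigmoid(self.log_alpha)
        z = (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)
        return x * z

    def expected_l0(self) -> torch.Tensor:
        # Probability each gate is nonzero, summed; added to the task loss.
        return torch.sigmoid(self.log_alpha
                             - self.beta * math.log(-self.gamma / self.zeta)).sum()

gate = HardConcreteGate(n=512)
h = torch.randn(4, 512)                 # encoder outputs (batch, features)
loss_reg = 1e-3 * gate.expected_l0()    # sparsity term for the training loss
h_sparse = gate(h)                      # gated outputs passed to the decoder
```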
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.