Verified invertible lexer using regular expressions and DFAs
- URL: http://arxiv.org/abs/2412.13581v1
- Date: Wed, 18 Dec 2024 08:03:17 GMT
- Title: Verified invertible lexer using regular expressions and DFAs
- Authors: Samuel Chassot, Viktor Kunčak
- Abstract summary: In this project, we explore the concept of invertibility applied to serialisation and lexing frameworks. While serialisation is the process of taking a data structure and writing it to a bit array, lexing is the process of reading a stream of characters and splitting them into tokens.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this project, we explore the concept of invertibility applied to serialisation and lexing frameworks. Recall that serialisation is the process of taking a data structure and writing it to a bit array, while parsing is the reverse operation: reading the bit array and reconstructing the data structure. Lexing, on the other hand, is the process of reading a stream of characters and splitting them into tokens by following a list of given rules. Although they appear in different applications, the two are similar in their abstract operation: both take a list of simple characters and extract a more complex structure. Moreover, both share a need for the process to be invertible. For example, when tokenising a code file that was pretty-printed by a compiler, one would expect to get the same sequence of tokens back. Similarly, when a spacecraft sends scientific data to the ground, one would expect the parsed data to be the same as the data serialised by the spacecraft. The goal of this project is to explore a framework capable of generating parser/serialiser or lexer/prettyprinter pairs with a formally verified notion of invertibility. We first survey related work and frameworks. After that, we present our verified lexer framework, developed in Scala and verified using the Stainless framework. We explain the implementation choices we make and present the specifications and their proofs. The code of the lexer with the proofs is available on GitHub. The main branch contains the regular expression (called regex from now on) matcher version and the verified Computable Languages, while the dfa match branch contains the version using the DFA matcher.
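To make the round-trip property concrete, here is a minimal, self-contained Scala sketch of the invertibility notion discussed above. It is an illustration under assumed names: the toy token types, `lex`, and `print` below are stand-ins, not the framework's actual verified definitions.

```scala
object InvertibleLexerSketch {
  sealed trait Token
  final case class Ident(name: String)    extends Token // letters only
  final case class Number(digits: String) extends Token // digits only

  // Prettyprinter: render tokens separated by single spaces.
  def print(tokens: List[Token]): String =
    tokens.map {
      case Ident(s)  => s
      case Number(s) => s
    }.mkString(" ")

  // Lexer: split on spaces and classify each chunk by its first character.
  def lex(input: String): List[Token] =
    input.split(' ').toList.filter(_.nonEmpty).map { chunk =>
      if (chunk.head.isDigit) Number(chunk) else Ident(chunk)
    }

  // Invertibility: re-lexing printed tokens is the identity on token lists.
  def roundTrip(tokens: List[Token]): Boolean =
    lex(print(tokens)) == tokens

  def main(args: Array[String]): Unit = {
    val toks = List(Ident("val"), Ident("x"), Number("42"))
    assert(roundTrip(toks))   // lex(print(toks)) == toks
    println(lex(print(toks))) // List(Ident(val), Ident(x), Number(42))
  }
}
```

In the paper's framework, a property of this shape is stated as a specification and proved once with Stainless rather than merely checked per input; the analogous serialisation property is parse(serialise(x)) == Some(x).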
Related papers
- StrTune: Data Dependence-based Code Slicing for Binary Similarity Detection with Fine-tuned Representation
Binary code similarity detection (BCSD) can address binary-level tasks such as malicious code snippet identification and binary patch analysis by comparing code patterns.
Because binaries are compiled under different compilation configurations, existing approaches still face notable limitations when comparing binaries for similarity.
We propose StrTune, which slices binary code based on data dependence and performs slice-level fine-tuning.
arXiv Detail & Related papers (2024-11-19T12:20:08Z) - LILO: Learning Interpretable Libraries by Compressing and Documenting Code
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch.
We find that LILO's auto-documentation (AutoDoc) procedure boosts performance by helping its synthesizer interpret and deploy learned abstractions.
arXiv Detail & Related papers (2023-10-30T17:55:02Z) - Lexinvariant Language Models
Token embeddings, a mapping from discrete lexical symbols to continuous vectors, are at the heart of any language model (LM).
We study lexinvariant language models that are invariant to lexical symbols and therefore do not need fixed token embeddings in practice.
We show that a lexinvariant LM can attain perplexity comparable to that of a standard language model, given a sufficiently long context.
arXiv Detail & Related papers (2023-05-24T19:10:46Z) - Linear-Time Modeling of Linguistic Structure: An Order-Theoretic Perspective
Tasks that model the relation between pairs of tokens in a string are a vital part of understanding natural language.
We show that the exhaustive pairwise comparison of tokens can be avoided and, moreover, that the complexity can be reduced to linear by casting the relation between tokens as a partial order over the string.
Our method predicts a real number for each token in a string in parallel and sorts the tokens accordingly, yielding total orders of the tokens in the string (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-05-24T11:47:35Z) - Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z) - What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary
We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space.
We show that the resulting projections contain rich semantic information, and we draw a connection between them and sparse retrieval (a toy sketch of such a projection appears after this list).
arXiv Detail & Related papers (2022-12-20T16:03:25Z) - Forming Trees with Treeformers
Many state-of-the-art neural network models, such as Transformers, have no explicit hierarchical structure in their architecture.
We introduce Treeformer, a general-purpose encoder module inspired by the CKY algorithm.
Our experiments demonstrate the benefits of incorporating hierarchical structure into the Transformer.
arXiv Detail & Related papers (2022-07-14T14:39:30Z) - InCoder: A Generative Model for Code Infilling and Synthesis
We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) and editing (via infilling).
InCoder is trained to generate code files from a large corpus of permissively licensed code.
Our model is the first generative model that is able to directly perform zero-shot code infilling.
arXiv Detail & Related papers (2022-04-12T16:25:26Z) - Learning to Look Inside: Augmenting Token-Based Encoders with Character-Level Information
XRayEmb is a method for retrofitting existing token-based models with character-level information.
We show that incorporating XRayEmb's learned vectors into sequences of pre-trained token embeddings helps performance on both autoregressive and masked pre-trained transformer architectures.
arXiv Detail & Related papers (2021-08-01T08:09:26Z) - Assessing the Effectiveness of Syntactic Structure to Learn Code Edit Representations
We use structural information from Abstract Syntax Trees (ASTs) to represent source code edits.
Inspired by the code2seq approach, we evaluate how using structural information from ASTs can help with the task of code edit classification.
arXiv Detail & Related papers (2021-06-11T01:23:07Z) - A Unifying Theory of Transition-based and Sequence Labeling Parsing
We map transition-based parsing algorithms that read sentences from left to right to sequence labeling encodings of syntactic trees.
This establishes a theoretical relation between transition-based parsing and sequence-labeling parsing.
We implement sequence labeling versions of four algorithms, showing that they are learnable and obtain comparable performance to existing encodings (a toy encoding of this kind is sketched after this list).
arXiv Detail & Related papers (2020-11-01T18:25:15Z)
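To illustrate the order-theoretic idea from Linear-Time Modeling of Linguistic Structure above, here is a toy Scala sketch: predict one real number per token instead of scoring all O(n^2) pairs, and let sorting induce the order. The function names and scores are assumptions for illustration, not the paper's code.

```scala
object OrderTheoreticSketch {
  // Given per-token scores (assumed to come from some model), sorting the
  // tokens yields a total order in O(n log n) instead of O(n^2) pair scoring.
  def relationPairs(tokens: Vector[String], scores: Vector[Double]): Vector[(String, String)] = {
    val ordered = tokens.zip(scores).sortBy(_._2).map(_._1)
    // Read the relation off adjacent positions in the induced total order.
    ordered.zip(ordered.tail)
  }

  def main(args: Array[String]): Unit = {
    val tokens = Vector("the", "cat", "sat")
    val scores = Vector(0.9, 0.1, 0.5)     // assumed model outputs
    println(relationPairs(tokens, scores)) // Vector((cat,sat), (sat,the))
  }
}
```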
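In the same spirit, a rough sketch of the vocabulary projection from What Are You Token About?: score a dense dual-encoder vector against every output embedding and read off the top-scoring tokens. The shapes, names, and tiny example vectors are all assumed for illustration.

```scala
object VocabProjectionSketch {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  // vocabEmbeddings(i) is the output embedding of vocabulary item vocab(i);
  // projecting means ranking the vocabulary by similarity to the dense vector.
  def topTokens(dense: Array[Double],
                vocab: Vector[String],
                vocabEmbeddings: Vector[Array[Double]],
                k: Int): Vector[String] =
    vocab.zip(vocabEmbeddings)
      .sortBy { case (_, emb) => -dot(dense, emb) } // highest score first
      .take(k)
      .map(_._1)

  def main(args: Array[String]): Unit = {
    val vocab = Vector("lexer", "token", "tree")
    val embs  = Vector(Array(1.0, 0.0), Array(0.0, 1.0), Array(0.5, 0.5))
    println(topTokens(Array(0.2, 0.9), vocab, embs, 2)) // Vector(token, tree)
  }
}
```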
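Finally, a toy sketch of the sequence-labeling view of parsing from A Unifying Theory of Transition-based and Sequence Labeling Parsing: tag each token with the relative offset of its head, so predicting one label per token determines the tree. This simple offset encoding is one assumed possibility, not the specific encodings the paper studies.

```scala
object SeqLabelingParseSketch {
  // heads(i) = index of token i's syntactic head, or -1 for the root.
  // Offset 0 cannot occur in a valid tree (no self-loops), so it marks the root.
  def encode(heads: Vector[Int]): Vector[Int] =
    heads.zipWithIndex.map { case (h, i) => if (h < 0) 0 else h - i }

  def decode(labels: Vector[Int]): Vector[Int] =
    labels.zipWithIndex.map { case (off, i) => if (off == 0) -1 else i + off }

  def main(args: Array[String]): Unit = {
    // "the cat sat": "the" and "sat" attach to "cat", the root (toy tree).
    val heads  = Vector(1, -1, 1)
    val labels = encode(heads)      // Vector(1, 0, -1)
    assert(decode(labels) == heads) // the label encoding is invertible
    println(labels)
  }
}
```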