Glushkov's construction for functional subsequential transducers
- URL: http://arxiv.org/abs/2008.02239v4
- Date: Tue, 22 Sep 2020 13:36:57 GMT
- Title: Glushkov's construction for functional subsequential transducers
- Authors: Aleksander Mendoza-Drosik
- Abstract summary: Glushkov's construction has many interesting properties, and they become even more evident when applied to transducers.
A special flavour of regular expressions is introduced, which can be efficiently converted to $\epsilon$-free functional subsequential weighted finite state transducers.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Glushkov's construction has many interesting properties, and they become even more evident when applied to transducers. This article strives to show the vast range of possible extensions and optimisations for this algorithm. A special flavour of regular expressions is introduced, which can be efficiently converted to $\epsilon$-free functional subsequential weighted finite state transducers. The produced automata are very compact, as they contain only one state for each symbol (from the input alphabet) of the original expression and only one transition for each range of symbols, no matter how large. Such compactified ranges of transitions allow for efficient binary search lookup during automaton evaluation. All the methods and algorithms presented here were used to implement an open-source compiler of regular expressions for multitape transducers.
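For illustration, here is a minimal Python sketch of the range-compacted transition lookup described above: a state stores its outgoing transitions as sorted, disjoint symbol intervals and resolves an input symbol by binary search. The class, field names, and data layout are assumptions made for this sketch, not the paper's actual compiler.

```python
from bisect import bisect_right

class State:
    """One automaton state; transitions are (lo, hi, target, output)
    tuples over disjoint, inclusive code-point intervals [lo, hi]."""
    def __init__(self, ranges):
        self.ranges = sorted(ranges)                     # sort by lower bound
        self.lows = [lo for lo, _, _, _ in self.ranges]  # keys for bisection

    def step(self, symbol):
        """Return (target_state, output) for `symbol`, or None if the
        transition function is undefined on it."""
        c = ord(symbol)
        i = bisect_right(self.lows, c) - 1  # last interval starting <= c
        if i >= 0:
            lo, hi, target, out = self.ranges[i]
            if c <= hi:
                return target, out
        return None

# A single transition covers 'a'..'z' -- one entry, however wide the range.
s = State([(ord('a'), ord('z'), 1, 'X'), (ord('0'), ord('9'), 2, 'Y')])
assert s.step('m') == (1, 'X')
assert s.step('5') == (2, 'Y')
assert s.step('!') is None
```

With k ranges on a state, each evaluation step costs O(log k) comparisons no matter how many symbols the ranges span, which is what makes a single transition over, say, an entire Unicode block cheap.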
Related papers
- Label-Looping: Highly Efficient Decoding for Transducers [19.091932566833265]
This paper introduces a highly efficient greedy decoding algorithm for Transducer-based speech recognition models.
Experiments show that the label-looping algorithm is up to 2.0X faster than conventional batched decoding when using batch size 32.
arXiv Detail & Related papers (2024-06-10T12:34:38Z)
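As context for the Label-Looping entry above, here is a hedged Python sketch of the conventional greedy, frame-by-frame Transducer decoding loop that such work speeds up. The `predictor` and `joiner` callables and the blank-id convention are illustrative stand-ins for a model's prediction and joint networks, not the paper's API.

```python
BLANK = 0  # assumed id of the blank symbol

def greedy_decode(encoder_frames, predictor, joiner, max_symbols_per_frame=10):
    """Conventional greedy Transducer decoding (not label-looping)."""
    hyp, state = [], None
    for frame in encoder_frames:                # advance one frame at a time
        for _ in range(max_symbols_per_frame):  # cap labels emitted per frame
            pred_out, new_state = predictor(hyp[-1] if hyp else None, state)
            label = joiner(frame, pred_out).argmax()
            if label == BLANK:                  # blank: move on to next frame
                break
            hyp.append(label)                   # non-blank: emit, stay on frame
            state = new_state
    return hyp
```

The variable-length inner loop is what makes naive batching of this procedure irregular across a batch, which is the inefficiency that faster batched decoders target.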
- Transformers as Transducers [27.48483887144685]
We study the sequence-to-sequence mapping capacity of transformers by relating them to finite transducers.
We extend the existing Boolean variant B-RASP to sequence-to-sequence functions and show that it computes exactly the first-order rational functions.
We show that masked average-hard attention transformers can simulate S-RASP.
arXiv Detail & Related papers (2024-04-02T15:34:47Z)
- Transduce: learning transduction grammars for string transformation [0.0]
A new algorithm, Transduce, is proposed to learn positional string transformations efficiently from one or two positive examples without inductive bias.
Experiments demonstrate that it achieves a success rate higher than the current state of the art.
arXiv Detail & Related papers (2023-12-14T07:59:02Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- On efficient quantum block encoding of pseudo-differential operators [6.134067544403308]
Block encoding lies at the core of many existing quantum algorithms.
This paper presents a study of the block encoding of a rich family of dense operators: the pseudo-differential operators (PDOs).
arXiv Detail & Related papers (2023-01-21T07:18:57Z)
- Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences.
Experiments show that composing these operators within our approach generates or edits high-quality text.
arXiv Detail & Related papers (2022-08-01T06:51:45Z)
- Supervised Quantile Normalization for Low-rank Matrix Approximation [50.445371939523305]
We learn the parameters of quantile normalization operators that can operate row-wise on the values of $X$ and/or of its factorization $UV$ to improve the quality of the low-rank representation of $X$ itself.
We demonstrate the applicability of these techniques on synthetic and genomics datasets.
arXiv Detail & Related papers (2020-02-08T21:06:02Z)
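For the quantile-normalization entry above, here is a minimal sketch of the classic, non-learned operator applied row-wise. The paper learns the target distribution end-to-end via differentiable operators; in this sketch the targets are a fixed input and all names are illustrative.

```python
import numpy as np

def quantile_normalize_rows(X, targets):
    """Replace each row's values by the sorted `targets` taken at the
    corresponding within-row ranks. len(targets) must equal X.shape[1]."""
    t = np.sort(targets)
    out = np.empty_like(X, dtype=float)
    for i, row in enumerate(X):
        ranks = np.argsort(np.argsort(row))  # rank of each entry in its row
        out[i] = t[ranks]                    # substitute target value by rank
    return out

X = np.array([[3.0, 1.0, 2.0],
              [9.0, 7.0, 8.0]])
print(quantile_normalize_rows(X, np.array([0.0, 0.5, 1.0])))
# [[1.  0.  0.5]
#  [1.  0.  0.5]]
```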
- Multi-level Head-wise Match and Aggregation in Transformer for Textual Sequence Matching [87.97265483696613]
We propose a new approach to sequence pair matching with Transformer, by learning head-wise matching representations on multiple levels.
Experiments show that our proposed approach can achieve new state-of-the-art performance on multiple tasks.
arXiv Detail & Related papers (2020-01-20T20:02:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.