Transduce: learning transduction grammars for string transformation
- URL: http://arxiv.org/abs/2401.09426v1
- Date: Thu, 14 Dec 2023 07:59:02 GMT
- Title: Transduce: learning transduction grammars for string transformation
- Authors: Francis Frydman, Philippe Mangion
- Abstract summary: A new algorithm, Transduce, is proposed, based on the construction of abstract transduction grammars and their generalization. Experiments show that it learns positional transformations efficiently from one or two positive examples without inductive bias, achieving a success rate higher than the current state of the art.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The synthesis of string transformation programs from input-output examples
utilizes various techniques, all based on an inductive bias that comprises a
restricted set of basic operators to be combined. A new algorithm, Transduce,
is proposed, which is founded on the construction of abstract transduction
grammars and their generalization. We experimentally demonstrate that Transduce
can learn positional transformations efficiently from one or two positive
examples without inductive bias, achieving a success rate higher than the
current state of the art.
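To give a flavour of the task, the sketch below infers a simple positional rearrangement of word tokens from a single input-output example and applies it to new inputs. It is a minimal illustrative sketch only, not the paper's abstract-transduction-grammar construction or its generalization procedure; the helper names `tokenize` and `learn_positional` are invented for this example.

```python
# Minimal sketch of a positional string transformation learned from one
# positive example. This is NOT the Transduce algorithm; it only mimics
# the kind of rearrangement such systems aim to learn.
import re
from typing import Callable, List


def tokenize(s: str) -> List[str]:
    # Split into alternating word / separator tokens, keeping the separators.
    return [t for t in re.split(r'(\W+)', s) if t]


def learn_positional(example_in: str, example_out: str) -> Callable[[str], str]:
    """Infer a position-based rearrangement of word tokens from one example."""
    in_words = [t for t in tokenize(example_in) if not re.fullmatch(r'\W+', t)]
    out_tokens = tokenize(example_out)

    # Each output token is either a copy of an input word (recorded by its
    # position) or a literal separator reproduced verbatim.
    recipe = []
    for tok in out_tokens:
        if tok in in_words:
            recipe.append(('pos', in_words.index(tok)))
        else:
            recipe.append(('lit', tok))

    def apply(new_input: str) -> str:
        words = [t for t in tokenize(new_input) if not re.fullmatch(r'\W+', t)]
        return ''.join(words[i] if kind == 'pos' else i for kind, i in recipe)

    return apply


# Usage: one positive example teaches "First Last" -> "Last, First".
swap = learn_positional("John Smith", "Smith, John")
print(swap("Ada Lovelace"))  # -> "Lovelace, Ada"
```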
Related papers
- Enhancing Transformers for Generalizable First-Order Logical Entailment [51.04944136538266]
This paper investigates the generalizable first-order logical reasoning ability of transformers with their parameterized knowledge.
The first-order reasoning capability of transformers is assessed through their ability to perform first-order logical entailment.
We propose a more sophisticated, logic-aware architecture, TEGA, to enhance the capability for generalizable first-order logical entailment in transformers.
arXiv Detail & Related papers (2025-01-01T07:05:32Z)
- Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations [75.14793516745374]
We propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training.
Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking.
Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token.
arXiv Detail & Related papers (2024-07-05T14:29:44Z)
- AlgoFormer: An Efficient Transformer Framework with Algorithmic Structures [80.28359222380733]
We design a novel transformer framework, dubbed AlgoFormer, to empower transformers with algorithmic capabilities.
In particular, inspired by the structure of human-designed learning algorithms, our transformer framework consists of a pre-transformer that is responsible for task preprocessing.
Some theoretical and empirical results are presented to show that the designed transformer has the potential to perform algorithm representation and learning.
arXiv Detail & Related papers (2024-02-21T07:07:54Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Fine-Tuning Transformers: Vocabulary Transfer [0.30586855806896046]
Transformers are responsible for the vast majority of recent advances in natural language processing.
This paper studies if corpus-specific tokenization used for fine-tuning improves the resulting performance of the model.
arXiv Detail & Related papers (2021-12-29T14:22:42Z)
- Glushkov's construction for functional subsequential transducers [91.3755431537592]
Glushkov's construction has many interesting properties, which become even more evident when applied to transducers.
A special flavour of regular expressions is introduced, which can be efficiently converted to $\epsilon$-free functional subsequential weighted finite-state transducers.
arXiv Detail & Related papers (2020-08-05T17:09:58Z)
- I-BERT: Inductive Generalization of Transformer to Arbitrary Context Lengths [2.604653544948958]
Self-attention has emerged as a vital component of state-of-the-art sequence-to-sequence models for natural language processing.
We propose I-BERT, a bi-directional Transformer that replaces positional encodings with a recurrent layer.
arXiv Detail & Related papers (2020-06-18T00:56:12Z)
- Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities [0.0]
A novel approach to automated learning of syntactic rules governing natural languages is proposed.
This method exploits the learned linguistic knowledge in transformers, without any reference to their inner representations.
We show a proof-of-concept example of our proposed technique, using it to guide unsupervised symbolic link-grammar induction methods.
arXiv Detail & Related papers (2020-05-26T06:18:47Z)
- Applying the Transformer to Character-level Transduction [68.91664610425114]
The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks.
We show that with a large enough batch size, the transformer does indeed outperform recurrent models for character-level tasks.
arXiv Detail & Related papers (2020-05-20T17:25:43Z)