Transduce: learning transduction grammars for string transformation
- URL: http://arxiv.org/abs/2401.09426v1
- Date: Thu, 14 Dec 2023 07:59:02 GMT
- Title: Transduce: learning transduction grammars for string transformation
- Authors: Francis Frydman, Philippe Mangion
- Abstract summary: A new algorithm, Transduce, is proposed, founded on the construction of abstract transduction grammars and their generalization.
Experiments demonstrate that Transduce can learn positional transformations efficiently from one or two positive examples without inductive bias, achieving a success rate higher than the current state of the art.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The synthesis of string transformation programs from input-output examples
utilizes various techniques, all based on an inductive bias that comprises a
restricted set of basic operators to be combined. A new algorithm, Transduce,
is proposed, which is founded on the construction of abstract transduction
grammars and their generalization. We experimentally demonstrate that Transduce
can learn positional transformations efficiently from one or two positive
examples without inductive bias, achieving a success rate higher than the
current state of the art.
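The abstract above describes learning positional string transformations from one or two positive examples. The following is a minimal, hypothetical Python sketch of that general idea, not the paper's Transduce algorithm (which builds and generalizes abstract transduction grammars): the helpers candidate_slices and learn are illustrative names, and the hypothesis space is deliberately restricted to simple positional slices anchored to either end of the input.

```python
# A minimal, hypothetical sketch (NOT the paper's Transduce algorithm): infer a
# positional slice transformation output == input[start:end] from one or two
# positive examples, where each bound may be anchored to the left (non-negative
# index) or to the right (negative index) of the input string.

from typing import Callable, List, Optional, Set, Tuple

Hypothesis = Tuple[int, Optional[int]]  # (start, end); end=None means "to the end"

def candidate_slices(inp: str, out: str) -> Set[Hypothesis]:
    """All positional slices of `inp` that reproduce `out` exactly."""
    n = len(inp)
    candidates: Set[Hypothesis] = set()
    for i in range(n):
        for j in range(i + 1, n + 1):
            if inp[i:j] != out:
                continue
            # Express each bound from the left (i, j) and from the right
            # (i - n, j - n); every combination is a distinct hypothesis.
            for start in (i, i - n):
                for end in (j, j - n) if j != n else (j, None):
                    candidates.add((start, end))
    return candidates

def learn(examples: List[Tuple[str, str]]) -> Optional[Callable[[str], str]]:
    """Intersect the hypotheses consistent with every positive example."""
    hypotheses = candidate_slices(*examples[0])
    for inp, out in examples[1:]:
        hypotheses &= candidate_slices(inp, out)
    if not hypotheses:
        return None  # no purely positional slice explains all examples
    start, end = next(iter(hypotheses))  # any surviving hypothesis is consistent
    return lambda s: s[start:] if end is None else s[start:end]

# Usage: "keep the first three characters", learned from two examples.
f = learn([("transduce", "tra"), ("grammar", "gra")])
assert f is not None
print(f("positional"))  # -> "pos"
```

With a single example many slice hypotheses typically survive; a second positive example prunes them by intersection, which mirrors why one or two examples can suffice for purely positional transformations.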
Related papers
- Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations [75.14793516745374]
We propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training.
Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking.
Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token.
arXiv Detail & Related papers (2024-07-05T14:29:44Z)
- On the Expressive Power of a Variant of the Looped Transformer [83.30272757948829]
We design a novel transformer block, dubbed AlgoFormer, to empower transformers with algorithmic capabilities.
The proposed AlgoFormer can achieve significantly higher performance in algorithm representation when using the same number of parameters.
Some theoretical and empirical results are presented to show that the designed transformer has the potential to be smarter than human-designed algorithms.
arXiv Detail & Related papers (2024-02-21T07:07:54Z)
- How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations [98.7450564309923]
This paper takes initial steps on understanding in-context learning (ICL) in more complex scenarios, by studying learning with representations.
We construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function.
We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size.
arXiv Detail & Related papers (2023-10-16T17:40:49Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Fine-Tuning Transformers: Vocabulary Transfer [0.30586855806896046]
Transformers are responsible for the vast majority of recent advances in natural language processing.
This paper studies if corpus-specific tokenization used for fine-tuning improves the resulting performance of the model.
arXiv Detail & Related papers (2021-12-29T14:22:42Z)
- Glushkov's construction for functional subsequential transducers [91.3755431537592]
Glushkov's construction has many interesting properties, and they become even more evident when applied to transducers.
A special flavour of regular expressions is introduced, which can be efficiently converted to $\epsilon$-free functional subsequential weighted finite state transducers (a minimal sketch of such a transducer follows this entry).
arXiv Detail & Related papers (2020-08-05T17:09:58Z)
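For readers unfamiliar with the target formalism of the entry above: a functional subsequential transducer reads its input deterministically, emits an output string on each transition, and appends a state-dependent final output when it accepts. Below is a minimal, hypothetical Python sketch of running such a transducer; the example machine and the run helper are illustrative only and are not produced by Glushkov's construction.

```python
# A minimal, hypothetical sketch of a functional subsequential transducer:
# deterministic transitions that each emit an output string, plus a final
# output attached to every accepting state.

from typing import Dict, Optional, Tuple

State = int
Symbol = str
# transitions[(state, symbol)] = (next_state, emitted_output)
Transitions = Dict[Tuple[State, Symbol], Tuple[State, str]]

def run(transitions: Transitions,
        final_output: Dict[State, str],
        start: State,
        word: str) -> Optional[str]:
    """Apply the transducer deterministically; return None if it blocks
    on a missing transition or halts in a non-accepting state."""
    state, out = start, []
    for symbol in word:
        step = transitions.get((state, symbol))
        if step is None:
            return None
        state, emitted = step
        out.append(emitted)
    if state not in final_output:
        return None
    return "".join(out) + final_output[state]

# A one-state transducer over {a, b}: rewrite 'a' -> 'x' and 'b' -> 'yy'.
T: Transitions = {(0, "a"): (0, "x"), (0, "b"): (0, "yy")}
print(run(T, {0: ""}, 0, "abba"))  # -> "xyyyyx"
```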
- I-BERT: Inductive Generalization of Transformer to Arbitrary Context Lengths [2.604653544948958]
Self-attention has emerged as a vital component of state-of-the-art sequence-to-sequence models for natural language processing.
We propose I-BERT, a bi-directional Transformer that replaces positional encodings with a recurrent layer.
arXiv Detail & Related papers (2020-06-18T00:56:12Z)
- Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities [0.0]
A novel approach to automated learning of syntactic rules governing natural languages is proposed.
This method exploits the learned linguistic knowledge in transformers, without any reference to their inner representations.
We show a proof-of-concept example of our proposed technique, using it to guide unsupervised symbolic link-grammar induction methods.
arXiv Detail & Related papers (2020-05-26T06:18:47Z)
- Applying the Transformer to Character-level Transduction [68.91664610425114]
The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks.
We show that with a large enough batch size, the transformer does indeed outperform recurrent models for character-level tasks.
arXiv Detail & Related papers (2020-05-20T17:25:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.