Convergence and Diversity in the Control Hierarchy
- URL: http://arxiv.org/abs/2306.03628v1
- Date: Tue, 6 Jun 2023 12:30:29 GMT
- Title: Convergence and Diversity in the Control Hierarchy
- Authors: Alexandra Butoi, Ryan Cotterell, David Chiang
- Abstract summary: We show that four formalisms are not only weakly equivalent but equivalent in a stricter sense that we call d-weak equivalence.
We make precise the intuition that a CFG controlling a CFG is a TAG, a PDA controlling a PDA is an embedded PDA, and a PDA controlling a CFG is a LIG.
The fourth member of this family, a CFG controlling a PDA, does not correspond to any formalism we know of, so we invent one and call it a Pushdown Adjoining Automaton.
- Score: 134.09048604473793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Weir has defined a hierarchy of language classes whose second member
($\mathcal{L}_2$) is generated by tree-adjoining grammars (TAG), linear indexed
grammars (LIG), combinatory categorial grammars, and head grammars. The
hierarchy is obtained using the mechanism of control, and $\mathcal{L}_2$ is
obtained using a context-free grammar (CFG) whose derivations are controlled by
another CFG. We adapt Weir's definition of a controllable CFG to give a
definition of controllable pushdown automata (PDAs). This yields three new
characterizations of $\mathcal{L}_2$ as the class of languages generated by
PDAs controlling PDAs, PDAs controlling CFGs, and CFGs controlling PDAs. We
show that these four formalisms are not only weakly equivalent but equivalent
in a stricter sense that we call d-weak equivalence. Furthermore, using an even
stricter notion of equivalence called d-strong equivalence, we make precise the
intuition that a CFG controlling a CFG is a TAG, a PDA controlling a PDA is an
embedded PDA, and a PDA controlling a CFG is a LIG. The fourth member of this
family, a CFG controlling a PDA, does not correspond to any formalism we know
of, so we invent one and call it a Pushdown Adjoining Automaton.
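For intuition about the control mechanism, here is a minimal sketch (the grammar, labels, and code are illustrative assumptions, not from the paper): a base CFG with labeled productions, where each derivation must spell out a word of a second, context-free control language. With the control language $\{p_1^n p_2 p_3^n p_4\}$, the controlled grammar generates $\{a^n b^n c^n d^n\}$, which lies in $\mathcal{L}_2$ but is not context-free.

```python
# Illustrative sketch of control (grammar and names are my own, not the
# paper's). Base CFG: labeled productions, each with exactly one
# nonterminal on its right-hand side, so every derivation has a single
# spine and its control word is just the sequence of labels used.
BASE = {
    "p1": ("S", ["a", "S", "d"]),  # S -> a S d
    "p2": ("S", ["T"]),            # S -> T
    "p3": ("T", ["b", "T", "c"]),  # T -> b T c
    "p4": ("T", []),               # T -> epsilon
}

def control_word(n: int) -> list[str]:
    """A word of the context-free control language {p1^n p2 p3^n p4}."""
    return ["p1"] * n + ["p2"] + ["p3"] * n + ["p4"]

def derive(labels: list[str]) -> str:
    """Rewrite the leftmost nonterminal as dictated by the control word."""
    sentential = ["S"]
    for label in labels:
        lhs, rhs = BASE[label]
        i = next(j for j, sym in enumerate(sentential) if sym.isupper())
        assert sentential[i] == lhs, "control word does not fit this derivation"
        sentential[i:i + 1] = rhs
    return "".join(sentential)

for n in range(4):
    print(derive(control_word(n)) or "(empty)")
# (empty), abcd, aabbccdd, aaabbbcccddd
```

Because every right-hand side here has exactly one nonterminal, each derivation has a single spine, which keeps the sketch simple; Weir's general definition tracks a control word along every spine of the derivation tree.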
Related papers
- Annotating Control-Flow Graphs for Formalized Test Coverage Criteria [0.3277163122167433]
We show how to annotate a Control-Flow Graph with decision information inferred from the graph itself.
We have implemented our algorithms in a tool which we show can be applied to automatically annotate CFGs output from popular compilers.
arXiv Detail & Related papers (2024-07-04T20:13:03Z)
- Incoherent GRAPE (inGRAPE) for optimization of quantum systems with environmentally assisted control [51.3422222472898]
We discuss applications of incoherent GRAPE method to high fidelity gate generation for open one- and two-qubit systems.
For a qutrit, a formulation of the environment-assisted incoherent control with time-dependent decoherence rates is provided.
arXiv Detail & Related papers (2024-03-26T05:13:26Z)
- Efficient Algorithms for Recognizing Weighted Tree-Adjoining Languages [104.90415092306219]
Four two-level formalisms, consisting of a CFG or PDA controlling another CFG or PDA, are equivalent to tree-adjoining grammars (TAG), linear indexed grammars (LIG), pushdown-adjoining automata (PAA), and embedded pushdown automata (EPDA), respectively.
We design new algorithms for computing their stringsums (the total weight of all derivations of a string) and allsums (the total weight of all derivations).
For EPDA, our algorithm is both more space-efficient and time-efficient than the algorithm of Alonso et al. (2001), by factors of $\mathcal{O}(|\Gamma|^2)$ and $\mathcal{O}(|\Gamma|^3)$ respectively, where $\Gamma$ is the stack alphabet.
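For intuition about these quantities, here is a hedged sketch in a simpler, assumed setting (an ordinary weighted CFG in Chomsky normal form, with a toy grammar of my own): the inside algorithm computes the stringsum, i.e., the total weight of all derivations of a given string. The paper's algorithms compute the analogous quantities for the two-level formalisms above.

```python
from collections import defaultdict

# Hypothetical weighted CFG in Chomsky normal form (an assumption for
# illustration): binary rules A -> B C and lexical rules A -> a.
BINARY = {("S", "A", "B"): 1.0, ("A", "A", "A"): 0.5}
LEXICAL = {("A", "a"): 0.5, ("B", "b"): 1.0}

def stringsum(s: str, start: str = "S") -> float:
    """Total weight of all derivations of s, via the inside algorithm."""
    n = len(s)
    inside = defaultdict(float)  # inside[i, j, A] = weight of s[i:j] from A
    for i, ch in enumerate(s):
        for (A, a), w in LEXICAL.items():
            if a == ch:
                inside[i, i + 1, A] += w
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for (A, B, C), w in BINARY.items():
                    inside[i, j, A] += w * inside[i, k, B] * inside[k, j, C]
    return inside[0, n, start]

print(stringsum("aab"))  # 0.125: the single derivation S -> A B, A -> A A
```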
arXiv Detail & Related papers (2023-10-23T18:26:00Z)
- Stay on topic with Classifier-Free Guidance [57.28934343207042]
We show that CFG can be used broadly as an inference-time technique in pure language modeling.
We show that CFG improves the performance of Pythia, GPT-2 and LLaMA-family models across an array of tasks.
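The guidance rule blends the model's conditional and unconditional next-token predictions; a minimal sketch (the toy vocabulary, stand-in probabilities, and the $\gamma$ value are assumptions):

```python
import numpy as np

# Classifier-free guidance applied to next-token logits (a sketch, not
# the paper's code).
def cfg_logits(cond: np.ndarray, uncond: np.ndarray, gamma: float) -> np.ndarray:
    """gamma = 1 recovers ordinary conditional decoding; gamma > 1
    pushes sampling toward tokens favored by the conditioning text."""
    return uncond + gamma * (cond - uncond)

cond = np.log(np.array([0.5, 0.2, 0.2, 0.1]))        # p(token | prompt)
uncond = np.log(np.array([0.25, 0.25, 0.25, 0.25]))  # p(token)
guided = cfg_logits(cond, uncond, gamma=2.0)
probs = np.exp(guided) / np.exp(guided).sum()        # softmax renormalization
print(probs.round(3))  # [0.735 0.118 0.118 0.029]: token 0 is amplified
```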
arXiv Detail & Related papers (2023-06-30T17:07:02Z)
- Grammar-Constrained Decoding for Structured NLP Tasks without Finetuning [27.59524153097858]
Grammar-constrained decoding (GCD) can be used to control the generation of large language models (LMs).
GCD can serve as a unified framework for structured NLP tasks in general.
We show that grammar-constrained LMs substantially outperform unconstrained LMs and can even beat task-specific finetuned models.
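A minimal sketch of the GCD idea (the toy "grammar" and scores below are assumptions, not the paper's machinery, which handles arbitrary formal grammars): at each decoding step, tokens the grammar cannot accept next are masked out before a token is selected.

```python
import math

# Toy "grammar": flat templates of the form  name = number ;
def allowed_next(prefix: tuple[str, ...]) -> set[str]:
    stages = [{"x", "y"}, {"="}, {"1", "2", "3"}, {";"}]
    return stages[len(prefix)] if len(prefix) < len(stages) else set()

def constrained_step(logits: dict[str, float], prefix: tuple[str, ...]) -> str:
    """Mask tokens the grammar cannot accept next, then pick the best one."""
    legal = allowed_next(prefix)
    masked = {t: (s if t in legal else -math.inf) for t, s in logits.items()}
    return max(masked, key=masked.get)

prefix: tuple[str, ...] = ()
for _ in range(4):
    # Stand-in scores; a real system would query the LM at each step.
    logits = {"x": 0.2, "y": 0.1, "=": 0.3, "1": 0.5, "2": 0.0, "3": 0.1, ";": 0.4}
    prefix += (constrained_step(logits, prefix),)
print(" ".join(prefix))  # x = 1 ;
```

Note that unconstrained greedy decoding would emit "1" first here; the mask is what keeps the output well-formed.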
arXiv Detail & Related papers (2023-05-23T11:54:37Z)
- Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic CFGs with hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences based on it.
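A hedged sketch of the setup (the toy grammar is my own, not one of the paper's CFGs): strings are sampled top-down from a small synthetic CFG with layered rules, and a GPT-style model is then trained on such strings.

```python
import random

# Toy synthetic CFG with layered rules (an assumed example): each
# nonterminal expands only into symbols of the next layer down.
RULES = {
    "S": [["A", "B"], ["B", "A", "A"]],
    "A": [["C", "C"], ["D", "C"]],
    "B": [["D", "D", "C"], ["C", "D"]],
    "C": [["a"], ["b"]],
    "D": [["b", "a"], ["a", "a"]],
}

def sample(symbol: str, rng: random.Random) -> str:
    """Expand nonterminals top-down until only terminals remain."""
    if symbol not in RULES:
        return symbol  # terminal
    return "".join(sample(s, rng) for s in rng.choice(RULES[symbol]))

rng = random.Random(0)
print(sample("S", rng))  # one string of the synthetic CFG language
```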
arXiv Detail & Related papers (2023-05-23T04:28:16Z)
- CODEP: Grammatical Seq2Seq Model for General-Purpose Code Generation [13.702504014245713]
General-purpose code generation aims to automatically convert the natural language (NL) description to code snippets in a general-purpose programming language (GPL) like Python.
Existing sequence-to-sequence (Seq2Seq) approaches generate code while neglecting grammar rules.
We propose CODEP, a grammatical Seq2Seq code generation framework equipped with a pushdown automaton (PDA) module.
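One way to picture the role of a PDA module (a simplified assumption, not CODEP's actual architecture): a pushdown stack tracks open delimiters during decoding, so the generator can reject tokens that would break the program's grammatical structure.

```python
# Sketch: a stack of opened delimiters constrains what the decoder may
# emit next, e.g. it can never close a bracket that was not opened.
PAIRS = {")": "(", "]": "[", "}": "{"}

def pda_allows(stack: list[str], token: str) -> bool:
    """Check whether emitting `token` keeps the delimiter structure valid."""
    if token in PAIRS:  # a closer must match the most recent opener
        return bool(stack) and stack[-1] == PAIRS[token]
    return True         # openers and ordinary tokens are unrestricted

def pda_step(stack: list[str], token: str) -> None:
    """Update the stack after the decoder emits `token`."""
    if token in PAIRS.values():
        stack.append(token)
    elif token in PAIRS:
        stack.pop()

stack: list[str] = []
for tok in ["print", "(", "[", "1", "]", ")"]:
    assert pda_allows(stack, tok)
    pda_step(stack, tok)
print("balanced" if not stack else "unbalanced")  # balanced
```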
arXiv Detail & Related papers (2022-11-02T01:40:18Z)
- Algorithms for Weighted Pushdown Automata [118.67634716230025]
Weighted pushdown automata (WPDAs) are at the core of many natural language processing tasks.
We develop novel algorithms that operate directly on WPDAs.
arXiv Detail & Related papers (2022-10-13T10:21:31Z)