Automata Cascades: Expressivity and Sample Complexity
- URL: http://arxiv.org/abs/2211.14028v1
- Date: Fri, 25 Nov 2022 11:02:33 GMT
- Title: Automata Cascades: Expressivity and Sample Complexity
- Authors: Alessandro Ronca, Nadezda A. Knorozova, Giuseppe De Giacomo
- Abstract summary: We show that cascades allow for describing the sample complexity of automata in terms of their components.
Our results show that one can in principle learn automata with infinite input alphabets and a number of states that is exponential in the amount of data available.
- Score: 90.53326983143644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Every automaton can be decomposed into a cascade of basic automata. This is
the Prime Decomposition Theorem by Krohn and Rhodes. We show that cascades
allow for describing the sample complexity of automata in terms of their
components. In particular, we show that the sample complexity is linear in the
number of components and the maximum complexity of a single component. This
opens up the possibility of learning automata that represent large dynamic
systems consisting of many interacting parts. This is in sharp
contrast with the established understanding of the sample complexity of
automata, described in terms of the overall number of states and input letters,
which in turn implies that it is only possible to learn automata where the
number of states and letters is linear in the amount of data available. Instead
our results show that one can in principle learn automata with infinite input
alphabets and a number of states that is exponential in the amount of data
available.
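The cascade structure underlying these results can be illustrated with a minimal Python sketch (the `run_cascade` helper and the two example components are illustrative, not from the paper): each component automaton reads the input letter together with the current states of the components that precede it in the cascade.

```python
def run_cascade(components, states, word):
    """Run a cascade of automata on an input word.

    components[i] is a transition function
    delta_i(state_i, letter, earlier_states) -> new state_i,
    where earlier_states are the current states of components 0..i-1.
    """
    states = list(states)
    for letter in word:
        new_states = []
        for i, delta in enumerate(components):
            # Component i sees the input letter plus the states of all
            # earlier components -- the defining feature of a cascade.
            new_states.append(delta(states[i], letter, tuple(states[:i])))
        states = new_states
    return tuple(states)

# Example: a two-component cascade over the alphabet {0, 1}.
# Component 0 tracks the parity of 1s seen so far; component 1 counts
# (mod 2) how often a 1 arrived while component 0 was in state 1.
parity = lambda s, a, _: (s + a) % 2
gated = lambda s, a, earlier: (s + (a and earlier[0])) % 2

run_cascade([parity, gated], (0, 0), [1, 1, 0, 1])  # -> (1, 1)
```

Each component here has only two states, yet the joint state space of a cascade grows multiplicatively with the number of components, which is the intuition behind learning automata whose state count is exponential in the available data.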
Related papers
- Counting Reward Automata: Sample Efficient Reinforcement Learning
Through the Exploitation of Reward Function Structure [13.231546105751015]
We present counting reward automata-a finite state machine variant capable of modelling any reward function expressible as a formal language.
We prove that an agent equipped with such an abstract machine is able to solve a larger set of tasks than those utilising current approaches.
arXiv Detail & Related papers (2023-12-18T17:20:38Z)
- ASAP: Automated Sequence Planning for Complex Robotic Assembly with Physical Feasibility [27.424678100675163]
We present ASAP, a physics-based planning approach for automatically generating a sequence for general-shaped assemblies.
A search can be guided by either geometrics or graph neural networks trained on data with simulation labels.
We show the superior performance of ASAP at generating physically realistic assembly sequence plans on a large dataset of hundreds of complex product assemblies.
arXiv Detail & Related papers (2023-09-29T00:27:40Z)
- Large Language Models as General Pattern Machines [64.75501424160748]
We show that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences.
Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary.
In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics.
arXiv Detail & Related papers (2023-07-10T17:32:13Z)
- Transformers Learn Shortcuts to Automata [52.015990420075944]
We find that a low-depth Transformer can represent the computations of any finite-state automaton.
We show that a Transformer with $O(\log T)$ layers can exactly replicate the computation of an automaton on an input sequence of length $T$.
We further investigate the brittleness of these solutions and propose potential mitigations.
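The shortcut idea can be illustrated outside the Transformer setting: map each input letter to its state-transition function and compose those functions with a balanced tree of depth roughly $\log_2 T$, rather than a length-$T$ sequential chain. A hedged Python sketch follows (the two-state parity automaton is an illustrative example, not from the paper):

```python
def compose(f, g):
    """Compose transition maps represented as tuples:
    (f then g)[s] = g[f[s]]."""
    return tuple(g[s] for s in f)

def scan_state(delta, word, start):
    """Final automaton state after reading `word`, computed with a
    balanced composition tree of depth ~log2(len(word))."""
    maps = [delta[a] for a in word]  # one transition map per letter
    while len(maps) > 1:
        # One "layer": compose adjacent pairs in parallel.
        maps = [compose(maps[i], maps[i + 1]) if i + 1 < len(maps)
                else maps[i]
                for i in range(0, len(maps), 2)]
    return maps[0][start]

# Example: two-state parity automaton over {0, 1}.
# delta[a] maps each state to its successor on letter a.
delta = {0: (0, 1), 1: (1, 0)}
scan_state(delta, [1, 0, 1, 1], 0)  # parity of three 1s -> state 1
```

Because function composition is associative, the tree and the sequential chain compute the same final state; the tree just needs logarithmically many layers, which is the sense in which a low-depth network can "shortcut" a recurrent computation.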
arXiv Detail & Related papers (2022-10-19T17:45:48Z)
- Benchmarking Multimodal AutoML for Tabular Data with Text Fields [83.43249184357053]
We assemble 18 multimodal data tables that each contain some text fields.
Our benchmark enables researchers to evaluate their own methods for supervised learning with numeric, categorical, and text features.
arXiv Detail & Related papers (2021-11-04T09:29:16Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- On (co-lex) Ordering Automata [2.800608984818919]
We show that a canonical, minimum-width, partially-ordered automaton H accepting a language L can be exhibited.
Using H, we prove that the width of the language can be effectively computed from the minimum automaton recognizing the language.
arXiv Detail & Related papers (2021-06-04T07:41:58Z)
- Visualizing computation in large-scale cellular automata [24.62657948019533]
Emergent processes in complex systems such as cellular automata can perform computations of increasing complexity.
We propose methods for coarse-graining cellular automata based on frequency analysis of cell states, clustering and autoencoders.
arXiv Detail & Related papers (2021-04-01T08:14:15Z)
- A Passive Online Technique for Learning Hybrid Automata from Input/Output Traces [0.0]
We propose a new technique for synthesizing a hybrid automaton from the input-output traces of a non-linear cyber-physical system.
Similarity detection in non-linear behaviors is the main challenge for extracting such models.
Our approach is passive, meaning that it does not need interaction with the system during automata synthesis from the logged traces.
arXiv Detail & Related papers (2021-01-18T13:08:14Z)
- Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also consists of two special states: a state indicating the successful completion of the task, and a state indicating that the task has finished without succeeding.
arXiv Detail & Related papers (2020-09-08T16:42:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.