Computational Hierarchy of Elementary Cellular Automata
- URL: http://arxiv.org/abs/2108.00415v1
- Date: Sun, 1 Aug 2021 10:00:54 GMT
- Title: Computational Hierarchy of Elementary Cellular Automata
- Authors: Barbora Hudcová and Tomáš Mikolov
- Abstract summary: We study the ability of cellular automata to emulate one another.
We show that certain chaotic automata are the only ones that cannot emulate any automata non-trivially.
We believe our work can help design parallel computational systems that are Turing-complete and also computationally efficient.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The complexity of cellular automata is traditionally measured by their
computational capacity. However, it is difficult to choose a challenging set of
computational tasks suitable for the parallel nature of such systems. We study
the ability of automata to emulate one another, and we use this notion to
define such a set of naturally emerging tasks. We present the results for
elementary cellular automata, although the core ideas can be extended to other
computational systems. We compute a graph showing which elementary cellular
automata can be emulated by which and show that certain chaotic automata are
the only ones that cannot emulate any automata non-trivially. Finally, we use
the emulation notion to suggest a novel definition of chaos that we believe is
suitable for discrete computational systems. We believe our work can help
design parallel computational systems that are Turing-complete and also
computationally efficient.
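The abstract's central objects are elementary cellular automata (ECA): binary, one-dimensional automata whose local rule maps each 3-cell neighborhood to a new cell state, indexed by Wolfram's rule-numbering scheme. As a minimal sketch (not code from the paper), one synchronous update step on a cyclic configuration looks like this; the configuration and rule number are illustrative choices:

```python
def eca_step(config, rule):
    """One synchronous update of an elementary CA on a cyclic binary configuration.

    `rule` is a Wolfram rule number (0..255): bit k of `rule` gives the new
    cell state for the neighborhood whose 3-bit encoding (left, center, right)
    equals k.
    """
    n = len(config)
    out = []
    for i in range(n):
        left, center, right = config[i - 1], config[i], config[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value in 0..7
        out.append((rule >> neighborhood) & 1)  # look up bit of the rule number
    return out

# Example: rule 110 on a small cyclic configuration with a single live cell.
print(eca_step([0, 0, 0, 1, 0, 0, 0, 0], 110))  # → [0, 0, 1, 1, 0, 0, 0, 0]
```

Emulation, in the sense studied here, asks whether the space-time behavior of one such rule can be recovered from another's under a suitable encoding of configurations and a rescaling of time; the code above only fixes the underlying dynamics being compared.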
Related papers
- Computational Dynamical Systems [0.6138671548064356]
We give definitions for what it means for a smooth dynamical system to simulate a Turing machine.
We show that 'chaotic' dynamical systems and 'integrable' dynamical systems cannot robustly simulate universal Turing machines.
More broadly, our work elucidates what it means for one 'machine' to simulate another, and emphasizes the necessity of defining low-complexity 'encoders' and 'decoders'.
arXiv Detail & Related papers (2024-09-18T17:51:48Z) - Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z) - A full-stack view of probabilistic computing with p-bits: devices,
architectures and algorithms [0.014319921806060482]
We provide a full-stack review of probabilistic computing with p-bits.
We argue that p-bits could be used to build energy-efficient probabilistic systems.
We outline the main applications of probabilistic computers ranging from machine learning to AI.
arXiv Detail & Related papers (2023-02-13T15:36:07Z) - The Basis of Design Tools for Quantum Computing: Arrays, Decision
Diagrams, Tensor Networks, and ZX-Calculus [55.58528469973086]
Quantum computers promise to efficiently solve important problems that classical computers never will.
A fully automated quantum software stack needs to be developed.
This work provides a look "under the hood" of today's tools and showcases how these means are utilized in them, e.g., for simulation, compilation, and verification of quantum circuits.
arXiv Detail & Related papers (2023-01-10T19:00:00Z) - Transformers Learn Shortcuts to Automata [52.015990420075944]
We find that a low-depth Transformer can represent the computations of any finite-state automaton.
We show that a Transformer with $O(\log T)$ layers can exactly replicate the computation of an automaton on an input sequence of length $T$.
We further investigate the brittleness of these solutions and propose potential mitigations.
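The logarithmic-depth claim rests on a standard observation: running a finite-state automaton over an input is a prefix product of per-symbol transition functions, and because function composition is associative, that product can be evaluated by a doubling scan in $O(\log T)$ rounds rather than $T$ sequential steps. A hedged sketch of that idea (the parity automaton below is an illustrative example, not one from the paper):

```python
def compose(f, g):
    """Compose two state maps (tuples indexed by state): apply f, then g."""
    return tuple(g[f[s]] for s in range(len(f)))

def run_by_scan(transitions, start):
    """Run an automaton by pairwise-composing transition maps.

    `transitions` holds one state map per input symbol, in input order.
    Each round halves the list, so the depth is ceil(log2(T)) rounds.
    """
    maps = list(transitions)
    while len(maps) > 1:
        maps = [compose(maps[i], maps[i + 1]) if i + 1 < len(maps) else maps[i]
                for i in range(0, len(maps), 2)]
    return maps[0][start]

# Example: 2-state parity automaton. Symbol 0 keeps the state (identity map),
# symbol 1 flips it (swap map). Input 1,1,0,1 has three 1s, so parity is odd.
identity, swap = (0, 1), (1, 0)
word = [swap, swap, identity, swap]
print(run_by_scan(word, start=0))  # → 1
```

Each round of the scan corresponds, roughly, to one layer of the shortcut construction; the sequential baseline would instead need one step per input symbol.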
arXiv Detail & Related papers (2022-10-19T17:45:48Z) - A Relative Church-Turing-Deutsch Thesis from Special Relativity and
Undecidability [0.0]
We show that the time, space, and error accumulated by the global simulator are simulation properties and are therefore undecidable.
These simulation properties give rise to special relativistic effects in the relative model which we use to construct a Church-Turing-Deutsch thesis.
arXiv Detail & Related papers (2022-06-13T18:58:41Z) - Classification of Discrete Dynamical Systems Based on Transients [0.0]
We present a novel classification method applicable to any class of deterministic discrete space and time dynamical systems.
We were able to identify a critical region of behavior that corresponds to a phase transition from ordered behavior to chaos.
Our work can be used to design systems in which complex structures emerge.
arXiv Detail & Related papers (2021-08-03T15:34:01Z) - Statistically Meaningful Approximation: a Case Study on Approximating
Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z) - Visualizing computation in large-scale cellular automata [24.62657948019533]
Emergent processes in complex systems such as cellular automata can perform computations of increasing complexity.
We propose methods for coarse-graining cellular automata based on frequency analysis of cell states, clustering and autoencoders.
arXiv Detail & Related papers (2021-04-01T08:14:15Z) - Predictive Coding Approximates Backprop along Arbitrary Computation
Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z) - Reservoir memory machines [79.79659145328856]
We propose reservoir memory machines, which are able to solve some of the benchmark tests for Neural Turing Machines.
Our model can also be seen as an extension of echo state networks with an external memory, enabling arbitrarily long storage without interference.
arXiv Detail & Related papers (2020-02-12T01:45:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.