A Path to Universal Neural Cellular Automata
- URL: http://arxiv.org/abs/2505.13058v2
- Date: Tue, 20 May 2025 21:12:51 GMT
- Title: A Path to Universal Neural Cellular Automata
- Authors: Gabriel Béna, Maxence Faldor, Dan F. M. Goodman, Antoine Cully
- Abstract summary: This work explores the potential of neural cellular automata to develop a continuous Universal Cellular Automaton. We introduce a cellular automaton model, objective functions and training strategies to guide neural cellular automata toward universal computation in a continuous setting.
- Score: 6.7822488410082755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cellular automata have long been celebrated for their ability to generate complex behaviors from simple, local rules, with well-known discrete models like Conway's Game of Life proven capable of universal computation. Recent advancements have extended cellular automata into continuous domains, raising the question of whether these systems retain the capacity for universal computation. In parallel, neural cellular automata have emerged as a powerful paradigm where rules are learned via gradient descent rather than manually designed. This work explores the potential of neural cellular automata to develop a continuous Universal Cellular Automaton through training by gradient descent. We introduce a cellular automaton model, objective functions and training strategies to guide neural cellular automata toward universal computation in a continuous setting. Our experiments demonstrate the successful training of fundamental computational primitives - such as matrix multiplication and transposition - culminating in the emulation of a neural network solving the MNIST digit classification task directly within the cellular automata state. These results represent a foundational step toward realizing analog general-purpose computers, with implications for understanding universal computation in continuous dynamics and advancing the automated discovery of complex cellular automata behaviors via machine learning.
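The training setup can be pictured with a minimal sketch, assuming a small grid, a 3x3 convolutional perception stage and a per-cell residual update; these are illustrative choices rather than the authors' architecture. The input matrix is written into one channel of the CA state, the learned local rule is applied for a fixed number of steps, and the value read back from that channel is pushed toward the transpose by gradient descent. The abstract indicates the same style of objective is used for primitives such as matrix multiplication and, ultimately, for emulating an MNIST classifier inside the CA state.
```python
# Minimal sketch (not the authors' model): an NCA trained by gradient
# descent to transpose the matrix stored in channel 0 of its state.
import jax
import jax.numpy as jnp

C, H, W = 4, 8, 8      # channels and grid size (assumed, square grid)
STEPS = 16             # CA updates per rollout (assumed)

def init_params(key):
    k1, k2 = jax.random.split(key)
    return {
        "conv": 0.1 * jax.random.normal(k1, (C, C, 3, 3)),  # 3x3 perception
        "mix":  0.1 * jax.random.normal(k2, (C, C)),        # per-cell update
    }

def step(params, state):
    # Local perception via a 3x3 convolution over the (C, H, W) state.
    perceived = jax.lax.conv_general_dilated(
        state[None], params["conv"], window_strides=(1, 1), padding="SAME")[0]
    # Residual per-cell update mixing the perceived channels.
    return state + jnp.einsum("oc,chw->ohw", params["mix"], jnp.tanh(perceived))

def rollout(params, x):
    # Write the input matrix into channel 0, run the CA, read channel 0 back.
    state = jnp.zeros((C, H, W)).at[0].set(x)
    for _ in range(STEPS):
        state = step(params, state)
    return state[0]

def loss(params, x):
    return jnp.mean((rollout(params, x) - x.T) ** 2)  # target: the transpose

key_p, key_x = jax.random.split(jax.random.PRNGKey(0))
params = init_params(key_p)
x = jax.random.normal(key_x, (H, W))
grads = jax.grad(loss)(params, x)                     # backprop through the CA rollout
params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```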
Related papers
- Learning Linear Attention in Polynomial Time [115.68795790532289]
We provide the first results on learnability of single-layer Transformers with linear attention.
We show that linear attention may be viewed as a linear predictor in a suitably defined RKHS.
We show how to efficiently identify training datasets for which every empirical risk minimizer is equivalent to the linear Transformer.
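For reference, a minimal sketch of single-layer linear (softmax-free) attention is below; the shapes are arbitrary and the paper's RKHS construction is not reproduced. The point is only that without a softmax the map from input to output is a fixed polynomial, which is the kind of structure a kernel view can exploit.
```python
# Generic single-layer linear attention (illustrative shapes only).
import jax.numpy as jnp

def linear_attention(X, Wq, Wk, Wv):
    # No softmax: the map X -> output is a fixed degree-3 polynomial in X.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return (Q @ K.T) @ V          # (seq_len, d_v)

X = jnp.ones((5, 16))             # 5 tokens, 16-dimensional embeddings
out = linear_attention(X, jnp.eye(16), jnp.eye(16), jnp.eye(16))  # (5, 16)
```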
arXiv Detail & Related papers (2024-10-14T02:41:01Z) - Spiking based Cellular Learning Automata (SCLA) algorithm for mobile robot motion formulation [0.0]
Spiking-based Cellular Learning Automata is proposed to let a mobile robot reach the target from any random initial point.
The proposed method is a result of the integration of both cellular automata and spiking neural networks.
arXiv Detail & Related papers (2023-09-01T04:16:23Z) - Game of Intelligent Life [0.0]
Recent advances in the field have combined CA with convolutional neural networks to achieve self-regenerating images.
The goal of this project is to use the idea of neural cellular automata to grow prediction machines.
arXiv Detail & Related papers (2023-01-02T23:06:26Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
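A hypothetical stand-in for such a compact on-board module is sketched below: a two-layer policy head mapping a feature vector to acceleration and steering angle. The sizes and activations are assumptions, not the paper's network.
```python
# Hypothetical compact policy head (not the paper's network).
import jax
import jax.numpy as jnp

def init(key, d_in=64, d_hidden=32):
    k1, k2 = jax.random.split(key)
    return {"W1": 0.1 * jax.random.normal(k1, (d_in, d_hidden)),
            "W2": 0.1 * jax.random.normal(k2, (d_hidden, 2))}

def policy(params, features):
    h = jnp.tanh(features @ params["W1"])
    accel, steer = jnp.tanh(h @ params["W2"])   # both squashed to [-1, 1]
    return accel, steer

params = init(jax.random.PRNGKey(0))
accel, steer = policy(params, jnp.zeros(64))
```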
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Computational Hierarchy of Elementary Cellular Automata [0.0]
We study the ability of cellular automata to emulate one another.
We show that certain chaotic automata are the only ones that cannot emulate any automaton non-trivially.
We believe our work can help design parallel computational systems that are Turing-complete and also computationally efficient.
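To make concrete the kind of system whose mutual emulation is studied, one synchronous update of an elementary cellular automaton (here rule 110, a rule known to be Turing-complete) can be written as a lookup over 3-cell neighbourhoods on a periodic lattice; this is a generic illustration, not the paper's code.
```python
# One synchronous update of an elementary CA (Wolfram rule number).
import jax.numpy as jnp

def eca_step(state, rule=110):
    # state: 1-D array of 0/1 ints with periodic boundary conditions.
    left, right = jnp.roll(state, 1), jnp.roll(state, -1)
    idx = 4 * left + 2 * state + right                 # neighbourhood code 0..7
    table = jnp.right_shift(rule, jnp.arange(8)) & 1   # rule number -> lookup table
    return table[idx]

state = jnp.zeros(64, dtype=jnp.int32).at[32].set(1)   # single live cell
state = eca_step(state)                                # rule 110 update
```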
arXiv Detail & Related papers (2021-08-01T10:00:54Z) - Towards self-organized control: Using neural cellular automata to robustly control a cart-pole agent [62.997667081978825]
We use neural cellular automata to control a cart-pole agent.
We trained the model using deep Q-learning, where the states of the output cells were used as the Q-value estimates to be optimized.
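A schematic of that read-out idea is sketched below, with a placeholder local rule standing in for the trained NCA; the grid size, observation embedding and update rule are assumptions.
```python
# Schematic read-out of Q-values from an NCA grid (placeholder rule).
import jax.numpy as jnp

def toy_update(params, state):
    # Stand-in local rule: neighbour average, scaled and squashed.
    nbrs = (jnp.roll(state, 1, 0) + jnp.roll(state, -1, 0)
            + jnp.roll(state, 1, 1) + jnp.roll(state, -1, 1)) / 4.0
    return jnp.tanh(params["w"] * nbrs + params["b"])

def q_values(params, observation, steps=8, grid=(8, 8)):
    state = jnp.zeros(grid).at[0, :4].set(observation)  # embed 4-D cart-pole obs
    for _ in range(steps):
        state = toy_update(params, state)
    return state[-1, :2]   # two designated output cells -> Q(left), Q(right)

q_left, q_right = q_values({"w": 1.5, "b": 0.1},
                           jnp.array([0.0, 0.1, 0.02, -0.1]))
```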
arXiv Detail & Related papers (2021-06-29T10:49:42Z) - Visualizing computation in large-scale cellular automata [24.62657948019533]
Emergent processes in complex systems such as cellular automata can perform computations of increasing complexity.
We propose methods for coarse-graining cellular automata based on frequency analysis of cell states, clustering and autoencoders.
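One of the listed ingredients, frequency-based coarse-graining, can be sketched as counting how often each cell state occurs inside non-overlapping blocks; the block size and number of states here are assumptions, not the paper's settings.
```python
# Frequency-based coarse-graining of a 2-D CA configuration.
import jax.numpy as jnp

def coarse_grain(state, n_states=2, block=8):
    # state: (H, W) array of integer cell states, H and W divisible by block.
    h, w = state.shape
    blocks = state.reshape(h // block, block, w // block, block)
    one_hot = blocks[..., None] == jnp.arange(n_states)
    return one_hot.mean(axis=(1, 3))   # (H//block, W//block, n_states)

coarse = coarse_grain(jnp.zeros((64, 64), dtype=jnp.int32))
```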
arXiv Detail & Related papers (2021-04-01T08:14:15Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also contains two special states: one indicating the successful completion of the task, and one indicating that the task has finished without succeeding.
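A minimal sketch of that structure follows, with states and subgoal labels invented purely for illustration.
```python
# Invented example of a subgoal automaton as a labelled transition system.
ACCEPT, REJECT = "u_acc", "u_rej"   # task solved / task failed

subgoal_automaton = {
    "u0": {"got_key": "u1", "hit_enemy": REJECT},
    "u1": {"opened_door": ACCEPT, "hit_enemy": REJECT},
}

def automaton_step(state, observed_subgoal):
    # Stay in the current state if no outgoing edge matches the observation.
    return subgoal_automaton.get(state, {}).get(observed_subgoal, state)

assert automaton_step("u0", "got_key") == "u1"
```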
arXiv Detail & Related papers (2020-09-08T16:42:55Z) - One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
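A software analogue of the one-step idea is closed-form least squares via the normal equations, which is the computation the crosspoint array is reported to carry out physically; the snippet below is only a numerical stand-in, not a model of the hardware, and the 13-feature shape simply mirrors the Boston housing data.
```python
# Numerical stand-in for one-step training: closed-form least squares.
import jax
import jax.numpy as jnp

def one_step_regression(X, y):
    # Normal equations, solved in a single (digital) step.
    return jnp.linalg.solve(X.T @ X, X.T @ y)

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (100, 13))         # 13 features, as in Boston housing
w = one_step_regression(X, X @ jnp.ones(13))  # recovers the all-ones weights
```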
arXiv Detail & Related papers (2020-05-05T08:00:07Z)