Reconstructing cellular automata rules from observations at nonconsecutive times
- URL: http://arxiv.org/abs/2012.02179v2
- Date: Sun, 14 Feb 2021 22:22:19 GMT
- Title: Reconstructing cellular automata rules from observations at nonconsecutive times
- Authors: Veit Elser
- Abstract summary: Recent experiments show that a deep neural network can be trained to predict the action of Conway's Game of Life automaton.
We describe an alternative network-like method, based on constraint projections, for which rule reconstruction from such data is possible.
We demonstrate the method on 1D binary cellular automata that take inputs from $n$ adjacent cells.
- Score: 7.056222499095849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent experiments by Springer and Kenyon have shown that a deep neural
network can be trained to predict the action of $t$ steps of Conway's Game of
Life automaton given millions of examples of this action on random initial
states. However, training was never completely successful for $t>1$, and even
when successful, a reconstruction of the elementary rule ($t=1$) from $t>1$
data is not within the scope of what the neural network can deliver. We
describe an alternative network-like method, based on constraint projections,
where this is possible. From a single data item this method perfectly
reconstructs not just the automaton rule but also the states in the time steps
it did not see. For a unique reconstruction, the size of the initial state need
only be large enough that it and the $t-1$ states it evolves into contain all
possible automaton input patterns. We demonstrate the method on 1D binary
cellular automata that take inputs from $n$ adjacent cells. The unknown rules
in our experiments are not restricted to simple rules derived from a few linear
functions on the inputs (as in Game of Life), but include all $2^{2^n}$
possible rules on $n$ inputs. Our results extend to $n=6$, for which exhaustive
rule-search is not feasible. By relaxing translational symmetry in space and
also time, our method is attractive as a platform for the learning of binary
data, since the discreteness of the variables does not pose the same challenge
it does for gradient-based methods.
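To make the setting concrete, the sketch below (not the paper's constraint-projection solver, just a plain-Python illustration of the $t=1$ case) represents a rule as a lookup table on $n$ adjacent cells, so there are $2^{2^n}$ possible rules, evolves a random initial state, and reads the rule back off from a single pair of consecutive states. The read-off is unique exactly when every $n$-cell input pattern occurs in the observed state, which is the coverage condition stated in the abstract. All names and parameter choices are illustrative.

```python
# Minimal sketch of the t=1 setting described in the abstract (NOT the paper's
# constraint-projection solver). A rule on n adjacent cells is a lookup table
# with 2^n entries, so there are 2^(2^n) possible rules. The naive read-off
# below recovers the rule from one consecutive pair of states exactly when
# every n-cell input pattern occurs in the earlier state.
import itertools
import random


def step(state, rule, n):
    """One CA update with periodic boundaries; cell i reads cells i..i+n-1."""
    L = len(state)
    return [rule[tuple(state[(i + j) % L] for j in range(n))] for i in range(L)]


def evolve(state, rule, n, t):
    """States at times 1..t, starting from `state` at time 0."""
    out = []
    for _ in range(t):
        state = step(state, rule, n)
        out.append(state)
    return out


def reconstruct_rule(before, after, n):
    """Read the rule off a single (state, next state) pair.
    Input patterns never observed in `before` stay unknown (None)."""
    L = len(before)
    rule = {p: None for p in itertools.product((0, 1), repeat=n)}
    for i in range(L):
        rule[tuple(before[(i + j) % L] for j in range(n))] = after[i]
    return rule


if __name__ == "__main__":
    n, width, t = 3, 64, 4
    random.seed(0)
    # Draw one of the 2^(2^n) possible rules uniformly at random.
    true_rule = {p: random.randint(0, 1) for p in itertools.product((0, 1), repeat=n)}
    state0 = [random.randint(0, 1) for _ in range(width)]
    trajectory = evolve(state0, true_rule, n, t)
    # Reconstruct from the one consecutive pair we pretend to have observed.
    guess = reconstruct_rule(state0, trajectory[0], n)
    seen = sum(bit is not None for bit in guess.values())
    print(f"input patterns observed: {seen}/{2 ** n}; exact recovery: {guess == true_rule}")
```

The paper's contribution concerns the harder case where only nonconsecutive states (spaced $t>1$ steps apart) are observed, so the intermediate states must be reconstructed jointly with the rule; the sketch only illustrates the rule space and the pattern-coverage condition.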
Related papers
- IT$^3$: Idempotent Test-Time Training [95.78053599609044]
This paper introduces Idempotent Test-Time Training (IT$^3$), a novel approach to addressing the challenge of distribution shift.
IT$^3$ is based on the universal property of idempotence.
We demonstrate the versatility of our approach across various tasks, including corrupted image classification.
arXiv Detail & Related papers (2024-10-05T15:39:51Z)
- Transfer Learning for Latent Variable Network Models [18.31057192626801]
We study transfer learning for estimation in latent variable network models.
We show that if the latent variables are shared, then vanishing error is possible.
Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks.
arXiv Detail & Related papers (2024-06-05T16:33:30Z)
- Improved Algorithm for Adversarial Linear Mixture MDPs with Bandit Feedback and Unknown Transition [71.33787410075577]
We study reinforcement learning with linear function approximation, unknown transition, and adversarial losses.
We propose a new algorithm that attains an $\widetilde{O}(d\sqrt{HS^3K} + \sqrt{HSAK})$ regret with high probability.
arXiv Detail & Related papers (2024-03-07T15:03:50Z)
- Self-Directed Linear Classification [50.659479930171585]
In online classification, a learner aims to predict the labels of a stream of examples in an online fashion so as to minimize the total number of mistakes.
Here we study the power of choosing the prediction order and establish the first strong separation between worst-order and random-order learning.
arXiv Detail & Related papers (2023-08-06T15:38:44Z)
- Generalization and Stability of Interpolating Neural Networks with Minimal Width [37.908159361149835]
We investigate the generalization and optimization of shallow neural networks trained by gradient descent in the interpolating regime.
We prove that the training loss is minimized with $m=\Omega(\log^4(n))$ neurons and $T\approx n$ iterations.
With $m=\Omega(\log^4(n))$ neurons and $T\approx n$, we bound the test loss by $\tilde{O}(1/)$.
arXiv Detail & Related papers (2023-02-18T05:06:15Z)
- Training Overparametrized Neural Networks in Sublinear Time [14.918404733024332]
Deep learning comes at a tremendous computational and energy cost.
We present a new view of neural networks as a collection of binary search trees, where each training iteration corresponds to modifying a small subset of them.
We believe this view will have further applications in the analysis of deep networks (DNNs).
arXiv Detail & Related papers (2022-08-09T02:29:42Z)
- High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation [89.21686761957383]
We study the first gradient descent step on the first-layer parameters $\boldsymbol{W}$ in a two-layer network.
Our results demonstrate that even one step can lead to a considerable advantage over random features.
arXiv Detail & Related papers (2022-05-03T12:09:59Z)
- Does Preprocessing Help Training Over-parameterized Neural Networks? [19.64638346701198]
We propose two novel preprocessing ideas to bypass the $\Omega(mnd)$ barrier.
Our results provide theoretical insights for a large number of previously established fast training methods.
arXiv Detail & Related papers (2021-10-09T18:16:23Z)
- Improved Sample Complexity for Incremental Autonomous Exploration in MDPs [132.88757893161699]
We learn the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps.
DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem.
arXiv Detail & Related papers (2020-12-29T14:06:09Z)
- Shuffling Recurrent Neural Networks [97.72614340294547]
We propose a novel recurrent neural network model, where the hidden state $h_t$ is obtained by permuting the vector elements of the previous hidden state $h_{t-1}$.
In our model, the prediction is given by a second learned function, which is applied to the hidden state $s(h_t)$. (A minimal sketch of this recurrence follows this entry.)
arXiv Detail & Related papers (2020-07-14T19:36:10Z)
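Of the related papers, the shuffling recurrent network describes its mechanism most explicitly, so here is a minimal NumPy sketch of that recurrence as summarized in the entry above: a fixed permutation of the previous hidden state, an additive learned function of the current input (the additive input term follows that paper's full abstract and is an assumption here), and a second learned map producing the prediction. The sizes, the random "learned" matrices, and the tanh nonlinearity are illustrative placeholders, not the paper's architecture.

```python
# Minimal sketch of the shuffling recurrence summarized above. The permutation
# is fixed; the random matrices B and S stand in for the two learned functions.
# Sizes, weights, and the tanh nonlinearity are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, T = 8, 16, 4, 10

perm = rng.permutation(d_hidden)              # fixed permutation of hidden units
B = rng.standard_normal((d_hidden, d_in))     # stands in for the learned input map b(x_t)
S = rng.standard_normal((d_out, d_hidden))    # stands in for the learned readout s(.)

h = np.zeros(d_hidden)
xs = rng.standard_normal((T, d_in))           # a toy input sequence
for x_t in xs:
    h = h[perm] + np.tanh(B @ x_t)            # h_t = permute(h_{t-1}) + b(x_t)
    y_t = S @ h                               # prediction from the second learned function
print(y_t.shape)                              # -> (4,)
```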
This list is automatically generated from the titles and abstracts of the papers in this site.