NoiseNCA: Noisy Seed Improves Spatio-Temporal Continuity of Neural Cellular Automata
- URL: http://arxiv.org/abs/2404.06279v3
- Date: Fri, 14 Jun 2024 11:48:51 GMT
- Title: NoiseNCA: Noisy Seed Improves Spatio-Temporal Continuity of Neural Cellular Automata
- Authors: Ehsan Pajouheshgar, Yitao Xu, Sabine Süsstrunk
- Abstract summary: NCA is a class of Cellular Automata where the update rule is parameterized by a neural network.
We show that existing NCA models tend to overfit the training discretization.
We propose a solution that utilizes uniform noise as the initial condition.
- Score: 23.73063532045145
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural Cellular Automata (NCA) is a class of Cellular Automata where the update rule is parameterized by a neural network that can be trained using gradient descent. In this paper, we focus on NCA models used for texture synthesis, where the update rule is inspired by partial differential equations (PDEs) describing reaction-diffusion systems. To train the NCA model, the spatio-temporal domain is discretized, and Euler integration is used to numerically simulate the PDE. However, whether a trained NCA truly learns the continuous dynamics described by the corresponding PDE or merely overfits the discretization used in training remains an open question. We study NCA models at the limit where space-time discretization approaches continuity. We find that existing NCA models tend to overfit the training discretization, especially in the proximity of the initial condition, also called the "seed". To address this, we propose a solution that utilizes uniform noise as the initial condition. We demonstrate the effectiveness of our approach in preserving the consistency of NCA dynamics across a wide range of spatio-temporal granularities. Our improved NCA model enables two new test-time interactions by allowing continuous control over the speed of pattern formation and the scale of the synthesized patterns. We demonstrate this new NCA feature in our interactive online demo. Our work reveals that NCA models can learn continuous dynamics and opens new avenues for NCA research from a dynamical systems perspective.
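The update rule described above maps directly to code. Below is a minimal NumPy sketch of an Euler-integrated NCA step started from a uniform-noise seed, assuming the perception stack commonly used in texture NCAs (identity, Sobel, and Laplacian filters) and an untrained two-layer per-cell network as a stand-in for the learned rule; all names, shapes, and constants are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 12, 64, 64  # cell-state channels, grid height, grid width (assumed)
HID = 96              # hidden width of the per-cell network (assumed)

# Fixed perception filters commonly used in texture NCAs:
# identity, Sobel-x, Sobel-y, and Laplacian.
IDENT = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=np.float32)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32) / 8.0
SOBEL_Y = SOBEL_X.T.copy()
LAPLACE = np.array([[1, 2, 1], [2, -12, 2], [1, 2, 1]], dtype=np.float32) / 16.0
FILTERS = [IDENT, SOBEL_X, SOBEL_Y, LAPLACE]

# Untrained random weights standing in for the learned update rule f_theta.
W1 = rng.normal(0.0, 0.1, size=(HID, C * len(FILTERS))).astype(np.float32)
W2 = rng.normal(0.0, 0.02, size=(C, HID)).astype(np.float32)

def depthwise_conv3x3(x, k):
    """Depthwise 3x3 convolution with circular padding, built from shifts."""
    out = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * np.roll(x, (dy, dx), axis=(1, 2))
    return out

def nca_step(state, dt):
    """One explicit Euler step: state <- state + dt * f_theta(perception(state))."""
    percept = np.concatenate([depthwise_conv3x3(state, k) for k in FILTERS])
    hidden = np.maximum(0.0, np.einsum("oc,chw->ohw", W1, percept))  # ReLU layer
    delta = np.einsum("oc,chw->ohw", W2, hidden)
    return state + dt * delta

# The paper's key change: seed with uniform noise rather than a constant state.
state = rng.uniform(-0.5, 0.5, size=(C, H, W)).astype(np.float32)
for _ in range(64):
    state = nca_step(state, dt=0.5)  # smaller dt = finer temporal discretization
print(state.shape, float(np.abs(state).mean()))
```

Shrinking `dt` here corresponds to a finer temporal discretization; per the abstract, this is the regime where constant-seed NCA models overfit their training discretization, while the uniform-noise seed keeps the dynamics consistent.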
Related papers
- Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning [38.09011520275557]
Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones.
We propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL.
arXiv Detail & Related papers (2024-06-04T15:47:03Z)
- Emergent Dynamics in Neural Cellular Automata [23.73063532045145]
We investigate the relationship between the Neural Cellular Automata architecture and the emergent dynamics of the trained models.
Our analysis reveals that the disparity and proportionality between these two variables have a strong correlation with the emergent dynamics in the NCA output.
arXiv Detail & Related papers (2024-04-09T15:54:03Z)
- Exploring Multiple Neighborhood Neural Cellular Automata (MNNCA) for Enhanced Texture Learning [0.0]
Cellular Automata (CA) have long been foundational in simulating dynamical systems.
Recent innovations have brought Neural Cellular Automata (NCA) into the realm of deep learning.
Differentiability allows NCAs to be trained via gradient descent, enabling them to evolve into specific shapes, generate textures, and mimic behaviors such as swarming.
Our research explores enhancing the NCA framework by incorporating multiple neighborhoods and introducing structured noise for seed states.
arXiv Detail & Related papers (2023-10-27T15:16:19Z)
- Learning spatio-temporal patterns with Neural Cellular Automata [0.0]
We train NCA to learn complex dynamics from time series of images and PDE trajectories.
We extend NCA to capture both transient and stable structures within the same system.
Being able to learn arbitrary dynamics gives NCA great potential as a data-driven modelling framework.
arXiv Detail & Related papers (2023-10-23T11:16:32Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Neural Delay Differential Equations: System Reconstruction and Image Classification [14.59919398960571]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
Compared to NODEs, NDDEs have a stronger capacity for nonlinear representation.
We achieve lower loss and higher accuracy not only on synthetically produced data but also on CIFAR-10, a well-known image dataset.
arXiv Detail & Related papers (2023-04-11T16:09:28Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training; a toy sketch of this idea follows the end of this list.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
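To make the integration-error idea from the "Training Generative Adversarial Networks by Solving Ordinary Differential Equations" entry concrete, below is a toy sketch of our own construction (not the paper's experiments), assuming the standard bilinear game min_x max_y xy: simultaneous gradient descent-ascent is explicit Euler on the rotation field v(x, y) = (-y, x) and spirals away from the equilibrium, while a classical RK4 step tracks the continuous orbit closely.

```python
import numpy as np

# Toy illustration of GAN training viewed as ODE integration, using the
# bilinear game min_x max_y x*y (our own illustrative choice). Simultaneous
# gradient descent-ascent is explicit Euler on v(x, y) = (-y, x); its
# integration error makes trajectories spiral outward, while a classical
# RK4 step follows the continuous rotation closely.

def v(z):
    x, y = z
    # descent on x (dx/dt = -dL/dx = -y), ascent on y (dy/dt = +dL/dy = x)
    return np.array([-y, x])

def euler_step(z, h):
    return z + h * v(z)

def rk4_step(z, h):
    k1 = v(z)
    k2 = v(z + 0.5 * h * k1)
    k3 = v(z + 0.5 * h * k2)
    k4 = v(z + h * k3)
    return z + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

z_euler = np.array([1.0, 0.0])
z_rk4 = np.array([1.0, 0.0])
for _ in range(200):
    z_euler = euler_step(z_euler, 0.1)
    z_rk4 = rk4_step(z_rk4, 0.1)

# Distance from the equilibrium (0, 0): Euler drifts outward (~2.7 here),
# while RK4 stays near the unit orbit (~1.0).
print("Euler:", np.linalg.norm(z_euler))
print("RK4:  ", np.linalg.norm(z_rk4))
```

The same tension between coarse explicit Euler steps and the underlying continuous dynamics is what the NoiseNCA abstract above probes for NCA models.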
This list is automatically generated from the titles and abstracts of the papers in this site.