A Waddington landscape for prototype learning in generalized Hopfield
networks
- URL: http://arxiv.org/abs/2312.03012v1
- Date: Mon, 4 Dec 2023 21:28:14 GMT
- Title: A Waddington landscape for prototype learning in generalized Hopfield
networks
- Authors: Nacer Eddine Boukacem, Allen Leary, Robin Thériault, Felix Gottlieb,
Madhav Mani, Paul François
- Abstract summary: We study the learning dynamics of Generalized Hopfield networks.
We observe a strong resemblance to the canalized, or low-dimensional, dynamics of cells as they differentiate.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Networks in machine learning offer examples of complex high-dimensional
dynamical systems reminiscent of biological systems. Here, we study the
learning dynamics of Generalized Hopfield networks, which permit a
visualization of internal memories. These networks have been shown to proceed
through a 'feature-to-prototype' transition, as the strength of network
nonlinearity is increased, wherein the learned, or terminal, states of internal
memories transition from mixed to pure states. Focusing on the prototype
learning dynamics of the internal memories, we observe a strong resemblance to
the canalized, or low-dimensional, dynamics of cells as they differentiate
within a Waddingtonian landscape. Dynamically, we demonstrate that learning in
a Generalized Hopfield Network proceeds through sequential 'splits' in memory
space. Furthermore, the order of splitting is interpretable and reproducible. The
dynamics between the splits are canalized in the Waddington sense -- robust to
variations in detailed aspects of the system. In attempting to make the analogy
a rigorous equivalence, we study smaller subsystems that exhibit similar
properties to the full system. We combine analytical calculations with
numerical simulations to study the dynamical emergence of the
feature-to-prototype transition, and the behaviour of the splits in the landscape --
the saddle points visited during learning. We exhibit regimes where saddles
appear and disappear through saddle-node bifurcations, qualitatively changing
the distribution of learned memories as the strength of the nonlinearity is
varied -- allowing us to systematically investigate the mechanisms that
underlie the emergence of Waddingtonian dynamics. Memories can thus
differentiate in a predictive and controlled way, revealing new bridges between
experimental biology, dynamical systems theory, and machine learning.
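The mechanics described above can be sketched numerically. Below is a minimal, illustrative numpy sketch, not the paper's exact training procedure: the sizes, learning rate, and normalised gradient-ascent rule are all assumptions. It drives internal memories up the gradient of a dense-associative-memory overlap energy, sum over mu of (xi_mu . v)^n, where the exponent n plays the role of the nonlinearity strength behind the feature-to-prototype transition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, not taken from the paper)
P, N, K = 40, 16, 4     # training patterns, units, internal memories
n = 3                   # interaction exponent; raising n strengthens
                        # the nonlinearity driving prototype formation

data = rng.choice([-1.0, 1.0], size=(P, N))        # binary training patterns
xi = rng.standard_normal((K, N))                   # memories start random
xi /= np.linalg.norm(xi, axis=1, keepdims=True)    # keep memories unit-norm

lr = 1e-2
for epoch in range(500):
    overlaps = data @ xi.T                         # (P, K) pattern-memory overlaps
    # Gradient of sum_{mu,a} (xi_mu . v_a)^n with respect to the memories:
    grad = n * (overlaps ** (n - 1)).T @ data      # (K, N)
    xi += lr * grad                                # gradient ascent on overlaps
    xi /= np.linalg.norm(xi, axis=1, keepdims=True)  # renormalise each step
```

In this toy setting, small n leaves converged memories mixing several training patterns, while larger n pushes each memory toward a single pure pattern, a rough analogue of the mixed-to-pure transition in the abstract. The saddle-node events mentioned there are locally described by the standard normal form dx/dt = r + x^2, in which a saddle and a node collide and annihilate as r crosses zero.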
Related papers
- A Generalized Framework for Multiscale State-Space Modeling with Nested Nonlinear Dynamics: An Application to Bayesian Learning under Switching Regimes [0.0]
We introduce a generalized framework for multiscale state-space modeling that incorporates nested nonlinear dynamics.
Our framework captures the complex interactions between fast and slow processes within systems.
We develop a Bayesian learning approach to estimate latent states and indicators corresponding to switching dynamics.
arXiv Detail & Related papers (2024-10-24T18:31:20Z) - Trans-Bifurcation Prediction of Dynamics in terms of Extreme Learning Machines with Control Inputs [0.49998148477760973]
We show that the entire structure of the bifurcations of a target one-parameter family of dynamical systems can be nearly reproduced by training on transient dynamics using only a few parameter values.
We propose a mechanism to explain this remarkable learning ability and discuss the relationship between the present results and similar results obtained by Kim et al.
arXiv Detail & Related papers (2024-10-17T07:34:23Z) - Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - Emergent learning in physical systems as feedback-based aging in a
glassy landscape [0.0]
We show that the learning dynamics resembles an aging process, where the system relaxes in response to repeated application of the feedback boundary forces.
We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems.
arXiv Detail & Related papers (2023-09-08T15:24:55Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Decomposed Linear Dynamical Systems (dLDS) for learning the latent
components of neural dynamics [6.829711787905569]
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples we demonstrate that our model can well approximate the original system.
arXiv Detail & Related papers (2022-06-07T02:25:38Z) - Learning Individual Interactions from Population Dynamics with Discrete-Event Simulation Model [9.827590402695341]
We will explore the possibility of learning a discrete-event simulation representation of complex system dynamics.
Our results show that the algorithm can data-efficiently capture complex network dynamics in several fields with meaningful events.
arXiv Detail & Related papers (2022-05-04T21:33:56Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.