Self-Organising Memristive Networks as Physical Learning Systems
- URL: http://arxiv.org/abs/2509.00747v1
- Date: Sun, 31 Aug 2025 08:44:02 GMT
- Title: Self-Organising Memristive Networks as Physical Learning Systems
- Authors: Francesco Caravelli, Gianluca Milano, Adam Z. Stieg, Carlo Ricciardi, Simon Anthony Brown, Zdenka Kuncic
- Abstract summary: Learning with physical systems is an emerging paradigm that seeks to harness the intrinsic nonlinear dynamics of physical substrates for learning. This Perspective highlights one promising approach using physical networks comprised of resistive memory nanoscale components. The overarching aim of this Perspective is to show how the convergence of nanotechnology, statistical physics, complex systems, and self-organising principles offers a unique opportunity to advance a new generation of physical intelligence technologies.
- Score: 0.8752279866335758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning with physical systems is an emerging paradigm that seeks to harness the intrinsic nonlinear dynamics of physical substrates for learning. The impetus for a paradigm shift in how hardware is used for computational intelligence stems largely from the unsustainability of artificial neural network software implemented on conventional transistor-based hardware. This Perspective highlights one promising approach using physical networks comprised of resistive memory nanoscale components with dynamically reconfigurable, self-organising electrical circuitry. Experimental advances have revealed the non-trivial interactions within these Self-Organising Memristive Networks (SOMNs), offering insights into their collective nonlinear and adaptive dynamics, and how these properties can be harnessed for learning using different hardware implementations. Theoretical approaches, including mean-field theory, graph theory, and concepts from disordered systems, reveal deeper insights into the dynamics of SOMNs, especially during transitions between different conductance states where criticality and other dynamical phase transitions emerge in both experiments and models. Furthermore, parallels between adaptive dynamics in SOMNs and plasticity in biological neuronal networks suggest the potential for realising energy-efficient, brain-like continual learning. SOMNs thus offer a promising route toward embedded edge intelligence, unlocking real-time decision-making for autonomous systems, dynamic sensing, and personalised healthcare, by enabling embedded learning in resource-constrained environments. The overarching aim of this Perspective is to show how the convergence of nanotechnology, statistical physics, complex systems, and self-organising principles offers a unique opportunity to advance a new generation of physical intelligence technologies.
Related papers
- Advancing Opinion Dynamics Modeling with Neural Diffusion-Convection-Reaction Equation [13.884804908187391]
We present OPINN, a physics-informed neural framework for opinion dynamics modeling.
Building upon Neural ODEs, we define neural opinion dynamics to coordinate neural networks with physical priors.
OPINN achieves state-of-the-art performance in opinion evolution forecasting.
arXiv Detail & Related papers (2026-02-05T07:41:19Z)
- Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence [55.07411490538404]
We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM).
IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors.
arXiv Detail & Related papers (2025-11-13T09:28:41Z)
- Machine Learning and Control: Foundations, Advances, and Perspectives [0.0]
We show that concepts such as simultaneous and ensemble controllability offer new insights into the classification and representation properties of deep neural networks.
We also explore the relationship between dynamic and static neural networks, where depth is traded for width.
We describe how classical properties of diffusion processes, long established in the context of partial differential equations, contribute to explaining the success of modern generative artificial intelligence.
arXiv Detail & Related papers (2025-09-30T10:47:26Z)
- Multi-Plasticity Synergy with Adaptive Mechanism Assignment for Training Spiking Neural Networks [10.519687559399623]
Spiking Neural Networks (SNNs) are promising brain-inspired models known for low power consumption and superior potential for temporal processing.
We propose a biologically inspired training framework that incorporates multiple synergistic plasticity mechanisms for more effective SNN training.
arXiv Detail & Related papers (2025-08-19T09:18:35Z)
- Dynamical Alignment: A Principle for Adaptive Neural Computation [1.0974389213466795]
We show that a fixed neural structure can operate in fundamentally different computational modes, driven not by its structure but by the temporal dynamics of its input signals.
We find that this computational advantage emerges from a timescale alignment between input dynamics and neuronal integration.
This principle offers a unified, computable perspective on long-observed dualities in neuroscience, from the stability-plasticity dilemma to segregation-integration dynamics.
arXiv Detail & Related papers (2025-08-13T06:35:57Z)
- Transformer Dynamics: A neuroscientific approach to interpretability of large language models [0.0]
We focus on the residual stream (RS) in transformer models, conceptualizing it as a dynamical system evolving across layers.
We find that activations of individual RS units exhibit strong continuity across layers, despite the RS being a non-privileged basis.
In reduced-dimensional spaces, the RS follows a curved trajectory with attractor-like dynamics in the lower layers.
arXiv Detail & Related papers (2025-02-17T18:49:40Z)
- Network Dynamics-Based Framework for Understanding Deep Neural Networks [11.44947569206928]
We propose a theoretical framework to analyze learning dynamics through the lens of dynamical systems theory.
We redefine the notions of linearity and nonlinearity in neural networks by introducing two fundamental transformation units at the neuron level.
Different transformation modes lead to distinct collective behaviors in weight vector organization, different modes of information extraction, and the emergence of qualitatively different learning phases.
arXiv Detail & Related papers (2025-01-05T04:23:21Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Noise-Aware Training of Neuromorphic Dynamic Device Networks [2.2691986670431197]
We propose a novel, noise-aware methodology for training device networks.
Our approach employs backpropagation through time and cascade learning, allowing networks to effectively exploit the temporal properties of physical devices.
arXiv Detail & Related papers (2024-01-14T22:46:53Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.