Learning with springs and sticks
- URL: http://arxiv.org/abs/2508.19015v1
- Date: Tue, 26 Aug 2025 13:26:26 GMT
- Title: Learning with springs and sticks
- Authors: Luis Mantilla Calderón, Alán Aspuru-Guzik
- Abstract summary: We study a simple dynamical system composed of springs and sticks capable of arbitrarily approximating any continuous function. We apply the proposed simulation system to regression tasks and show that its performance is comparable to that of multi-layer perceptrons. We empirically find a \emph{thermodynamic learning barrier} for the system caused by the fluctuations of the environment.
- Score: 6.765839157891597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning is a physical process. Here, we aim to study a simple dynamical system composed of springs and sticks capable of arbitrarily approximating any continuous function. The main idea of our work is to use the sticks to mimic a piecewise-linear approximation of the given function, use the potential energy of springs to encode a desired mean squared error loss function, and converge to a minimum-energy configuration via dissipation. We apply the proposed simulation system to regression tasks and show that its performance is comparable to that of multi-layer perceptrons. In addition, we study the thermodynamic properties of the system and find a relation between the free energy change of the system and its ability to learn an underlying data distribution. We empirically find a \emph{thermodynamic learning barrier} for the system caused by the fluctuations of the environment, whereby the system cannot learn if its change in free energy hits such a barrier. We believe this simple model can help us better understand learning systems from a physical point of view.
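The abstract's recipe (sticks forming a piecewise-linear interpolant, springs whose total potential energy equals the mean squared error, and dissipation relaxing the system to a minimum-energy configuration) can be sketched numerically. This is a minimal illustration, not the authors' code: the toy dataset, the fixed knot positions, and the plain gradient-descent relaxation standing in for overdamped dynamics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

knots = np.linspace(0.0, 1.0, 12)  # fixed x-positions of the stick joints
h = np.zeros_like(knots)           # joint heights: the learnable state

# The piecewise-linear prediction is linear in the joint heights: pred = W @ h.
idx = np.clip(np.searchsorted(knots, x) - 1, 0, knots.size - 2)
w = (x - knots[idx]) / (knots[idx + 1] - knots[idx])
W = np.zeros((x.size, knots.size))
W[np.arange(x.size), idx] = 1.0 - w
W[np.arange(x.size), idx + 1] = w

def energy(h):
    # Total potential energy of unit-stiffness vertical springs tying the
    # stick chain to the data: proportional to the MSE loss.
    return 0.5 * np.mean((W @ h - y) ** 2)

e0 = energy(h)
# Overdamped, dissipative relaxation: the joint heights follow -grad(E).
eta = 5.0
for _ in range(500):
    h -= eta * W.T @ (W @ h - y) / x.size
print(f"energy: {e0:.3f} -> {energy(h):.3f}")
```

The spring energy plays the role of the loss, and dissipation plays the role of the optimizer: relaxing to mechanical equilibrium is exactly minimizing the MSE of the piecewise-linear fit.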
Related papers
- Energy Loss Functions for Physical Systems [16.10782090682612]
We propose a framework that incorporates physical information directly into the loss function for prediction and generative modeling tasks. We demonstrate our approach on molecular generation and spin ground-state prediction and report significant improvements over baselines.
arXiv Detail & Related papers (2025-11-03T21:58:36Z)
- The Importance of Being Lazy: Scaling Limits of Continual Learning [60.97756735877614]
We show that increasing model width is only beneficial when it reduces the amount of feature learning, yielding more laziness. We study the intricate relationship between feature learning, task non-stationarity, and forgetting, finding that high feature learning is only beneficial with highly similar tasks.
arXiv Detail & Related papers (2025-06-20T10:12:38Z)
- Learning Stochastic Thermodynamics Directly from Correlation and Trajectory-Fluctuation Currents [0.0]
Currents have recently gained increased attention for their role in bounding entropy production. We introduce a fundamental relationship between these cumulant currents and standard machine-learning loss functions. These loss functions reproduce results derived both from thermodynamic uncertainty relations (TURs) and other methods. More significantly, they open a path to discovering new loss functions for previously inaccessible quantities.
arXiv Detail & Related papers (2025-04-26T19:42:09Z)
- Predicting the Energy Landscape of Stochastic Dynamical System via Physics-informed Self-supervised Learning [27.544116710935278]
Energy landscapes play a crucial role in shaping the dynamics of many real-world complex systems. We propose a physics-informed self-supervised learning method to learn the energy landscape from the evolution trajectories of the system.
arXiv Detail & Related papers (2025-02-24T04:26:26Z)
- Discovering Physics Laws of Dynamical Systems via Invariant Function Learning [51.84691955495693]
We consider learning the underlying laws of dynamical systems governed by ordinary differential equations (ODEs). We propose a new method, known as Disentanglement of Invariant Functions (DIF). Our code has been released as part of the AIRS library.
arXiv Detail & Related papers (2025-02-06T20:46:50Z)
- Pioneer: Physics-informed Riemannian Graph ODE for Entropy-increasing Dynamics [61.70424540412608]
We present a physics-informed graph ODE for a wide range of entropy-increasing dynamical systems. We prove that entropy is non-decreasing under our formulation, in accordance with the laws of physics. Empirical results show the superiority of Pioneer on real datasets.
arXiv Detail & Related papers (2025-02-05T14:54:30Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning: the attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (ET for short), that uses a sequence of attention layers purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- The problem of engines in statistical physics [62.997667081978825]
Engines are open systems that can generate work cyclically, at the expense of an external disequilibrium.
Recent advances in the theory of open quantum systems point to a more realistic description of autonomous engines.
We show how the external loading force and the thermal noise may be incorporated into the relevant equations of motion.
arXiv Detail & Related papers (2021-08-17T03:59:09Z)
- Using the Environment to Understand non-Markovian Open Quantum Systems [0.0]
We show how to use system correlations, calculated by any method, to infer any correlation function of a Gaussian environment.
In order to obtain accurate bath dynamics, we exploit a numerically exact approach to simulating the system dynamics.
arXiv Detail & Related papers (2021-06-08T09:43:03Z)
- Reward Propagation Using Graph Convolutional Networks [61.32891095232801]
We propose a new framework for learning potential functions by leveraging ideas from graph representation learning.
Our approach relies on Graph Convolutional Networks which we use as a key ingredient in combination with the probabilistic inference view of reinforcement learning.
arXiv Detail & Related papers (2020-10-06T04:38:16Z)
- Causal Discovery in Physical Systems from Videos [123.79211190669821]
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z)
- Accurately Solving Physical Systems with Graph Learning [22.100386288615006]
We introduce a novel method to accelerate iterative solvers for physical systems with graph networks.
Unlike existing methods that aim to learn physical systems in an end-to-end manner, our approach guarantees long-term stability.
Our method improves the runtime performance of traditional iterative solvers.
arXiv Detail & Related papers (2020-06-06T15:48:34Z)
- Scale bridging materials physics: Active learning workflows and integrable deep neural networks for free energy function representations in alloys [0.0]
In mechano-chemically interacting materials systems, even consideration of only compositions, order parameters and strains can render the free energy to be reasonably high-dimensional.
In proposing the free energy as a paradigm for scale bridging, we have previously exploited neural networks for their representation of such high-dimensional functions.
We have developed an integrable deep neural network (IDNN) that can be trained on free energy derivative data obtained from atomic-scale models and statistical mechanics, then analytically integrated to recover a free energy density function.
arXiv Detail & Related papers (2020-01-30T03:59:24Z)
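The IDNN idea above (fit a model to derivative data, then integrate it analytically to recover the underlying free energy) can be illustrated in one dimension. A hedged sketch, not the paper's IDNN: a polynomial basis stands in for the neural network, and the synthetic derivative data is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.linspace(-1.0, 1.0, 100)
# Noisy samples of f'(c) for a double-well toy free energy f(c) = c^4 - c^2.
dfdc = 4 * c**3 - 2 * c + 0.01 * rng.normal(size=c.size)

deg = 5
# Design matrix of monomial derivatives: d/dc c^k = k c^(k-1), k = 1..deg.
A = np.stack([k * c ** (k - 1) for k in range(1, deg + 1)], axis=1)
coef, *_ = np.linalg.lstsq(A, dfdc, rcond=None)

def f(c):
    # Analytic antiderivative of the fitted model: sum_k coef_k * c^k
    # (integration constant fixed to 0).
    return sum(ck * c ** k for k, ck in zip(range(1, deg + 1), coef))

err = np.max(np.abs(f(c) - (c**4 - c**2)))
print(f"max reconstruction error: {err:.4f}")
```

Because the basis is integrable in closed form, fitting the derivative and integrating the fit recovers the function itself up to a constant, which is the essence of the IDNN construction.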
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.