Harnessing intuitive local evolution rules for physical learning
- URL: http://arxiv.org/abs/2507.19561v1
- Date: Fri, 25 Jul 2025 10:51:42 GMT
- Title: Harnessing intuitive local evolution rules for physical learning
- Authors: Roie Ezraty, Menachem Stern, Shmuel M. Rubinstein
- Abstract summary: We introduce a training scheme for physical systems that minimize power dissipation in which only boundary parameters are externally controlled. Using this scheme, these Boundary-Enabled Adaptive State Tuning Systems learn by exploiting local physical rules. Our scheme, BEASTAL (BEAST-Adaline), is the closest analog of the Adaline algorithm for such systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning, however popular and accessible, is computationally intensive and highly power-consuming, prompting interest in alternative physical implementations of learning tasks. We introduce a training scheme for physical systems that minimize power dissipation in which only boundary parameters (i.e. inputs and outputs) are externally controlled. Using this scheme, these Boundary-Enabled Adaptive State Tuning Systems (BEASTS) learn by exploiting local physical rules. Our scheme, BEASTAL (BEAST-Adaline), is the closest analog of the Adaline algorithm for such systems. We demonstrate this autonomous learning in silico for regression and classification tasks. Our approach advances previous physical learning schemes by using intuitive, local evolution rules without requiring large-scale memory or complex internal architectures. BEASTAL can perform any linear task, achieving best performance when the local evolution rule is non-linear.
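A minimal sketch of the Adaline (LMS) delta rule that BEASTAL parallels may help fix ideas. The code below is illustrative only: it runs the update in software, with an optional nonlinearity `g` standing in for the paper's non-linear local evolution rule; the physical power-dissipation dynamics and boundary-only control are not modeled here, and all names and hyperparameters are assumptions.

```python
import numpy as np

def adaline_train(X, y, lr=0.01, epochs=100, g=np.tanh):
    """Fit weights w so that X @ w approximates y via delta-rule updates."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            error = y_i - x_i @ w        # output-boundary error signal
            w += lr * g(error) * x_i     # local, per-weight update
    return w

# Usage: a linear regression task, as in the paper's in-silico demonstrations.
X = np.random.default_rng(1).uniform(-1, 1, size=(200, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true
print(adaline_train(X, y))  # should approach w_true
```

With `g = np.tanh` the per-weight update is non-linear in the error while remaining strictly local, which is the flavor of rule the abstract reports works best.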
Related papers
- LeARN: Learnable and Adaptive Representations for Nonlinear Dynamics in System Identification [0.0]
We introduce a nonlinear system identification framework called LeARN. It transcends the need for prior domain knowledge by learning the library of basis functions directly from data. We validate our framework on the Neural Fly dataset, showcasing its robust adaptation capabilities.
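As a rough illustration of learning a basis library from data (rather than fixing it a priori, as in SINDy-style identification), the sketch below fits a 1-D Gaussian basis with learnable centers by gradient descent. Everything here is hypothetical; the actual LeARN framework is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = np.sin(3 * x)                        # stand-in for measured dynamics data

K = 8
mu = rng.uniform(-2, 2, K)               # learnable basis-function centers
c = np.zeros(K)                          # coefficients over the basis
lr = 0.05

for _ in range(2000):
    phi = np.exp(-(x[:, None] - mu) ** 2)          # (200, K) basis matrix
    err = phi @ c - y
    grad_c = 2 * phi.T @ err / len(x)              # MSE gradient w.r.t. c
    grad_mu = (2 * (err[:, None] * phi * 2 * (x[:, None] - mu)).sum(axis=0)
               * c / len(x))                       # MSE gradient w.r.t. centers
    c -= lr * grad_c
    mu -= lr * grad_mu

print("fit RMSE:", np.sqrt(np.mean((phi @ c - y) ** 2)))
```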
arXiv Detail & Related papers (2024-12-16T18:03:23Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
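The abstract does not spell out the quantization scheme, but the underlying idea of data-driven action discretization can be sketched with a simple k-means codebook over the dataset's actions; the paper's adaptive, learned scheme is more sophisticated, and the function names below are hypothetical.

```python
import numpy as np

def kmeans_action_codebook(actions, n_bins=16, iters=50, seed=0):
    """Cluster the continuous actions in an offline dataset into n_bins codes."""
    rng = np.random.default_rng(seed)
    centers = actions[rng.choice(len(actions), n_bins, replace=False)]
    for _ in range(iters):
        # Assign each action to its nearest code, then recompute the codes.
        d = np.linalg.norm(actions[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(n_bins):
            if (labels == k).any():
                centers[k] = actions[labels == k].mean(axis=0)
    return centers

actions = np.random.default_rng(1).uniform(-1, 1, size=(5000, 2))
codebook = kmeans_action_codebook(actions)
# Replay-buffer actions are then replaced by their nearest codebook index,
# so a discrete-action method (e.g. IQL, CQL, BRAC) can run on the data.
```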
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- Controlling dynamical systems to complex target states using machine learning: next-generation vs. classical reservoir computing [68.8204255655161]
Controlling nonlinear dynamical systems using machine learning makes it possible to drive systems not only into simple behavior such as periodicity, but also into more complex, arbitrary dynamics.
We first show that classical reservoir computing excels at this task.
In a next step, we compare those results, obtained with different amounts of training data, to an alternative setup in which next-generation reservoir computing is used instead.
It turns out that while delivering comparable performance for typical amounts of training data, next-generation RC significantly outperforms classical RC in situations where only very limited data is available.
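For reference, a classical reservoir computer (echo state network) of the kind compared here can be sketched in a few lines: a fixed random recurrent reservoir whose linear readout is trained by ridge regression. The task and hyperparameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 300, 2000
u = np.sin(0.1 * np.arange(T + 1))               # toy input signal
target = u[1:]                                   # one-step-ahead prediction

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # fixed, untrained reservoir
    states[t] = x

# Only the linear readout is trained, via ridge regression.
beta = 1e-6
W_out = target @ states @ np.linalg.inv(states.T @ states + beta * np.eye(N))
print("train RMSE:", np.sqrt(np.mean((states @ W_out - target) ** 2)))
```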
arXiv Detail & Related papers (2023-07-14T07:05:17Z)
- An FPGA Architecture for Online Learning using the Tsetlin Machine [5.140342614848069]
This paper proposes a novel field-programmable gate array (FPGA) infrastructure for online learning.
It implements a low-complexity machine learning algorithm called the Tsetlin Machine.
We present use cases for online learning using the proposed infrastructure and demonstrate the energy/performance/accuracy trade-offs.
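The machine's building block, a single Tsetlin automaton, is simple enough to sketch directly; the full Tsetlin Machine composes many such automata into clauses, and the FPGA specifics are of course not captured in this software sketch.

```python
import random

class TsetlinAutomaton:
    """States 1..n select action 0; states n+1..2n select action 1.
    Rewards push the state deeper into the current action's half;
    penalties push it toward the boundary and eventually flip the action."""
    def __init__(self, n=100):
        self.n = n
        self.state = n                   # start at the boundary, action 0

    @property
    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        if self.action == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        self.state += 1 if self.action == 0 else -1

# Toy environment: action 1 is rewarded 70% of the time, action 0 only 30%.
ta = TsetlinAutomaton()
for _ in range(10000):
    p = 0.7 if ta.action == 1 else 0.3
    ta.reward() if random.random() < p else ta.penalize()
print("converged action:", ta.action)    # almost always 1
```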
arXiv Detail & Related papers (2023-06-01T13:33:26Z)
- Hierarchical-Hyperplane Kernels for Actively Learning Gaussian Process Models of Nonstationary Systems [5.1672267755831705]
We present a kernel family that incorporates a partitioning that is learnable via gradient-based methods.
We empirically demonstrate excellent performance on various active learning tasks.
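One plausible reading of a gradient-learnable partitioning kernel is a sigmoid gate over a hyperplane with side-specific lengthscales, sketched below. This is a hypothetical simplification (the paper's kernel family is hierarchical), but the gate parameters (w, b) and lengthscales are all differentiable, so they could in principle be learned by gradient-based methods.

```python
import numpy as np

def gate(x, w, b):
    """Soft membership of x in one side of the hyperplane w.x + b = 0."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def rbf(x, z, ls):
    return np.exp(-np.sum((x - z) ** 2) / (2 * ls ** 2))

def hyperplane_kernel(x, z, w, b, ls_left=0.3, ls_right=2.0):
    gx, gz = gate(x, w, b), gate(z, w, b)
    # Points on the same side are compared with that side's lengthscale,
    # letting the GP behave nonstationarily across the partition. The sum of
    # products of gates and RBF kernels remains a valid (PSD) kernel.
    return ((1 - gx) * (1 - gz) * rbf(x, z, ls_left)
            + gx * gz * rbf(x, z, ls_right))

w, b = np.array([1.0, 0.0]), 0.0
print(hyperplane_kernel(np.array([-1.0, 0.0]), np.array([-1.2, 0.1]), w, b))
```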
arXiv Detail & Related papers (2023-03-17T14:50:51Z)
- Smoothed Online Learning for Prediction in Piecewise Affine Systems [43.64498536409903]
This paper builds on the recently developed smoothed online learning framework.
It provides the first algorithms for prediction and simulation in piecewise affine systems.
arXiv Detail & Related papers (2023-01-26T15:54:14Z)
- Efficient Online Learning with Memory via Frank-Wolfe Optimization: Algorithms with Bounded Dynamic Regret and Applications to Control [15.588080817106563]
We introduce the first projection-free meta-base learning algorithm with memory that minimizes dynamic regret.
We are motivated by artificial intelligence applications where autonomous agents need to adapt to time-varying environments.
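The projection-free ingredient is the Frank-Wolfe step, which replaces the projection of online gradient descent with a linear minimization oracle (LMO) over the feasible set. The sketch below shows that step on a stream of quadratic losses over an l2 ball; the paper's meta-base construction with memory and its dynamic-regret bounds are not reproduced, and all names here are illustrative.

```python
import numpy as np

def lmo_l2_ball(grad, radius=1.0):
    """argmin of <grad, s> over ||s|| <= radius: boundary point opposite grad."""
    return -radius * grad / (np.linalg.norm(grad) + 1e-12)

def online_frank_wolfe(thetas, radius=1.0, gamma=0.2):
    x = np.zeros(thetas.shape[1])
    trajectory = []
    for theta in thetas:
        grad = 2 * (x - theta)           # gradient of f_t(x) = ||x - theta||^2
        s = lmo_l2_ball(grad, radius)    # one LMO call; no projection anywhere
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
        trajectory.append(x.copy())
    return np.array(trajectory)

# Drifting targets inside the unit ball model a time-varying environment;
# a constant step size lets the iterate keep tracking the moving comparator.
t = np.linspace(0, 2 * np.pi, 200)
thetas = 0.8 * np.stack([np.cos(t), np.sin(t)], axis=1)
traj = online_frank_wolfe(thetas)
print("final tracking error:", np.linalg.norm(traj[-1] - thetas[-1]))
```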
arXiv Detail & Related papers (2023-01-02T01:12:29Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in systems at equilibrium, such as equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
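The basic protocol being stabilized, $Q$-learning with linear function approximation, can be sketched as follows; the paper's exploration variant and its stability mechanisms are not reproduced, and the epsilon-greedy rule, the toy chain MDP, and all hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
d = n_states * n_actions

def phi(s, a):
    """One-hot features; with these, the linear 'approximation' is exact."""
    v = np.zeros(d)
    v[s * n_actions + a] = 1.0
    return v

w = np.zeros(d)                      # Q(s, a) is approximated as phi(s, a) @ w
alpha, gamma, eps = 0.1, 0.9, 0.1
s = 0
for _ in range(20000):
    if rng.random() < eps:           # illustrative epsilon-greedy exploration
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax([phi(s, b) @ w for b in range(n_actions)]))
    # Toy chain MDP: action 1 moves right, action 0 left; reward at the end.
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    td_target = r + gamma * max(phi(s_next, b) @ w for b in range(n_actions))
    w += alpha * (td_target - phi(s, a) @ w) * phi(s, a)
    s = 0 if s_next == n_states - 1 else s_next      # episodic restart
print("Q(0, a):", [round(float(phi(0, b) @ w), 2) for b in range(n_actions)])
```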
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Controlling nonlinear dynamical systems into arbitrary states using machine learning [77.34726150561087]
We propose a novel and fully data-driven control scheme that relies on machine learning (ML).
Exploiting recently developed ML-based prediction capabilities of complex systems, we demonstrate that nonlinear systems can be forced to stay in arbitrary dynamical target states coming from any initial state.
Because this highly flexible control scheme places few demands on the amount of required data, we briefly discuss possible applications ranging from engineering to medicine.
arXiv Detail & Related papers (2021-02-23T16:58:26Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches, in contrast, can operate directly from raw sensory inputs with only a reward signal to describe the task, but they are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
- Online Constrained Model-based Reinforcement Learning [13.362455603441552]
A key requirement is the ability to handle continuous state and action spaces while remaining within a limited time and resource budget.
We propose a model-based approach that combines Gaussian Process regression and Receding Horizon Control.
We test our approach on a cart pole swing-up environment and demonstrate the benefits of online learning on an autonomous racing task.
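A toy version of the two ingredients can be sketched as follows, with kernel ridge regression standing in for the GP posterior mean and random shooting as the receding-horizon optimizer. The dynamics, cost, and all hyperparameters are invented for illustration and are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    return 0.9 * x + 0.5 * np.sin(x) + u          # unknown to the controller

# Collect a small dataset of (state, action) -> next-state transitions.
X = rng.uniform(-2, 2, size=(100, 2))             # columns: state, action
Y = true_dynamics(X[:, 0], X[:, 1])

def k(A, B, ls=0.7):
    return np.exp(-((A[:, None] - B[None]) ** 2).sum(-1) / (2 * ls ** 2))

alpha = np.linalg.solve(k(X, X) + 1e-4 * np.eye(len(X)), Y)
gp_mean = lambda z: k(z[None], X)[0] @ alpha      # GP posterior mean

# Receding-horizon control toward x = 0 by random shooting over the model.
x, H, n_cand = 1.5, 5, 200
for _ in range(20):
    U = rng.uniform(-1, 1, size=(n_cand, H))      # candidate action sequences
    cost = np.zeros(n_cand)
    xs = np.full(n_cand, x)
    for h in range(H):
        xs = np.array([gp_mean(np.array([xi, ui]))
                       for xi, ui in zip(xs, U[:, h])])
        cost += xs ** 2                           # penalize distance from 0
    u = U[cost.argmin(), 0]                       # apply only the first action
    x = true_dynamics(x, u)                       # then re-plan next step
print("final state:", x)
```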
arXiv Detail & Related papers (2020-04-07T15:51:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.