Learning Beyond Optimization: Stress-Gated Dynamical Regime Regulation in Autonomous Systems
- URL: http://arxiv.org/abs/2602.18581v1
- Date: Fri, 20 Feb 2026 19:39:56 GMT
- Title: Learning Beyond Optimization: Stress-Gated Dynamical Regime Regulation in Autonomous Systems
- Authors: Sheng Ran
- Abstract summary: We propose a framework for learning without an explicit objective. Instead of minimizing external error signals, the system evaluates the intrinsic health of its own internal dynamics. Our results suggest a possible route toward autonomous learning systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their apparent diversity, modern machine learning methods can be reduced to a remarkably simple core principle: learning is achieved by continuously optimizing parameters to minimize or maximize a scalar objective function. This paradigm has been extraordinarily successful for well-defined tasks where goals are fixed and evaluation criteria are explicit. However, if artificial systems are to move toward true autonomy, operating over long horizons and across evolving contexts, objectives may become ill-defined, shifting, or entirely absent. In such settings, a fundamental question emerges: in the absence of an explicit objective function, how can a system determine whether its ongoing internal dynamics are productive or pathological? And how should it regulate structural change without external supervision? In this work, we propose a dynamical framework for learning without an explicit objective. Instead of minimizing external error signals, the system evaluates the intrinsic health of its own internal dynamics and regulates structural plasticity accordingly. We introduce a two-timescale architecture that separates fast state evolution from slow structural adaptation, coupled through an internally generated stress variable that accumulates evidence of persistent dynamical dysfunction. Structural modification is then triggered not continuously, but as a state-dependent event. Through a minimal toy model, we demonstrate that this stress-regulated mechanism produces temporally segmented, self-organized learning episodes without reliance on externally defined goals. Our results suggest a possible route toward autonomous learning systems capable of self-assessment and internally regulated structural reorganization.
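The abstract's two-timescale mechanism can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' model: the state update, the "intrinsic health" signal (here, deviation of the state norm from a target band), the stress timescale, and the structural event (a random weight perturbation) are all illustrative assumptions chosen only to show the gating pattern of fast dynamics, slow stress accumulation, and event-triggered structural change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: fast recurrent state x, weights W, and a slow
# stress accumulator s that integrates evidence of persistent dysfunction.
n = 8
W = rng.normal(scale=0.5, size=(n, n))
x = rng.normal(size=n)
s = 0.0              # internally generated stress variable
tau_s = 0.01         # slow stress timescale (illustrative value)
threshold = 1.0      # stress level that triggers a structural event
events = []

for t in range(5000):
    # Fast timescale: state evolution under fixed structure.
    x = np.tanh(W @ x)
    # Illustrative "intrinsic health" signal: deviation of the state norm
    # from a target band (penalizes both collapse and saturation).
    dysfunction = abs(np.linalg.norm(x) - 0.5 * np.sqrt(n))
    # Slow timescale: stress integrates persistent dysfunction and leaks
    # away when dynamics are healthy (dysfunction below the 0.2 baseline).
    s = max(s + tau_s * (dysfunction - 0.2), 0.0)
    if s > threshold:
        # State-dependent structural event: perturb the weights,
        # reset the state, and discharge the accumulated stress.
        W += rng.normal(scale=0.1, size=W.shape)
        x = rng.normal(size=n)
        s = 0.0
        events.append(t)

print(f"structural events triggered: {len(events)}")
```

Because stress must accumulate across many fast steps before crossing the threshold, structural changes occur as temporally segmented episodes rather than continuous updates, which is the qualitative behavior the abstract describes.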
Related papers
- Self-adapting Robotic Agents through Online Continual Reinforcement Learning with World Model Feedback [2.165723322157105]
This work presents a framework for online Continual Reinforcement Learning that enables automated adaptation during deployment. The proposed method leverages world model prediction residuals to detect out-of-distribution events and automatically trigger finetuning. The approach is validated on a variety of contemporary continuous control problems, including a quadruped robot in high-fidelity simulation.
arXiv Detail & Related papers (2026-03-04T13:07:42Z)
- Human-Inspired Continuous Learning of Internal Reasoning Processes: Learning How to Think for Adaptive AI Systems [0.11844977816228043]
Internal reasoning processes are crucial for developing AI systems capable of sustained adaptation in dynamic real-world environments. We propose a human-inspired continuous learning framework that unifies reasoning, action, reflection, and verification within a sequential reasoning model.
arXiv Detail & Related papers (2026-02-12T03:19:04Z)
- ToolSelf: Unifying Task Execution and Self-Reconfiguration via Tool-Driven Intrinsic Adaptation [60.25542764389203]
Agentic systems powered by Large Language Models (LLMs) have demonstrated remarkable potential in tackling complex, long-horizon tasks. Existing approaches, relying on manual orchestration or runtime-based patches, often struggle with poor generalization and fragmented optimization. We propose ToolSelf, a novel paradigm enabling tool-driven self-readjustment.
arXiv Detail & Related papers (2026-02-08T09:27:18Z)
- Active Thinking Model: A Goal-Directed Self-Improving Framework for Real-World Adaptive Intelligence [0.11844977816228043]
We propose a unified cognitive framework that integrates goal reasoning, dynamic task generation, and self-reflective learning into an adaptive architecture. A mathematically grounded theoretical analysis demonstrates that ATM can autonomously evolve from suboptimal to optimal behavior without external supervision.
arXiv Detail & Related papers (2025-11-02T01:13:12Z)
- Activation Function Design Sustains Plasticity in Continual Learning [1.618563064839635]
In continual learning, models can progressively lose the ability to adapt. We show that activation choice is a primary, architecture-agnostic lever for mitigating plasticity loss.
arXiv Detail & Related papers (2025-09-26T16:41:47Z)
- Understanding Learning Dynamics Through Structured Representations [1.7244210453129227]
This paper investigates how internal structural choices shape the behavior of learning systems. We analyze how these structures influence gradient flow, spectral sensitivity, and fixed-point behavior. Rather than prescribing fixed templates, we emphasize principles of tractable design that can steer learning behavior in interpretable ways.
arXiv Detail & Related papers (2025-08-04T07:15:57Z)
- Dynamic Manipulation of Deformable Objects in 3D: Simulation, Benchmark and Learning Strategy [88.8665000676562]
Prior methods often simplify the problem to low-speed or 2D settings, limiting their applicability to real-world 3D tasks. To mitigate data scarcity, we introduce a novel simulation framework and benchmark grounded in reduced-order dynamics. We propose Dynamics Informed Diffusion Policy (DIDP), a framework that integrates imitation pretraining with physics-informed test-time adaptation.
arXiv Detail & Related papers (2025-05-23T03:28:25Z)
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations. In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z)
- Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level Prediction Error Dynamics [68.8204255655161]
We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
arXiv Detail & Related papers (2020-07-29T06:53:13Z)
- Euclideanizing Flows: Diffeomorphic Reduction for Learning Stable Dynamical Systems [74.80320120264459]
We present an approach to learn such motions from a limited number of human demonstrations.
The complex motions are encoded as rollouts of a stable dynamical system.
The efficacy of this approach is demonstrated through validation on an established benchmark as well as demonstrations collected on a real-world robotic system.
arXiv Detail & Related papers (2020-05-27T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.