Meta-learning Structure-Preserving Dynamics
- URL: http://arxiv.org/abs/2508.11205v1
- Date: Fri, 15 Aug 2025 04:30:27 GMT
- Title: Meta-learning Structure-Preserving Dynamics
- Authors: Cheng Jing, Uvini Balasuriya Mudiyanselage, Woojin Cho, Minju Jo, Anthony Gruber, Kookjin Lee
- Abstract summary: We introduce a modulation-based meta-learning framework that conditions structure-preserving models on compact latent representations of potentially unknown system parameters. We enable scalable and generalizable learning across parametric families of dynamical systems.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Structure-preserving approaches to dynamics modeling have demonstrated great potential for modeling physical systems due to their strong inductive biases that enforce conservation laws and dissipative behavior. However, the resulting models are typically trained for fixed system configurations, requiring explicit knowledge of system parameters as well as costly retraining for each new set of parameters -- a major limitation in many-query or parameter-varying scenarios. Meta-learning offers a potential solution, but existing approaches like optimization-based meta-learning often suffer from training instability or limited generalization capability. Inspired by ideas from computer vision, we introduce a modulation-based meta-learning framework that directly conditions structure-preserving models on compact latent representations of potentially unknown system parameters, avoiding the need for gray-box system knowledge and explicit optimization during adaptation. Through the application of novel modulation strategies to parametric energy-conserving and dissipative systems, we enable scalable and generalizable learning across parametric families of dynamical systems. Experiments on standard benchmark problems demonstrate that our approach achieves accurate predictions in few-shot learning settings, without compromising on the essential physical constraints necessary for dynamical stability and effective generalization performance across parameter space.
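The modulation idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: a separable Hamiltonian H(q, p; z) = p²/2 + V(q; z) whose potential network is conditioned on a compact latent code z through FiLM-style scale-and-shift modulation (one common conditioning strategy borrowed from computer vision), integrated with a symplectic leapfrog scheme so that energy is near-conserved for any latent code. All names, dimensions, and the use of finite differences in place of autodiff are illustrative assumptions; in the few-shot setting described in the paper, z would be inferred from context trajectories rather than drawn at random.

```python
import numpy as np

def film(h, z, W_gamma, W_beta):
    # FiLM-style modulation: the latent code z produces per-feature
    # scale and shift applied to the hidden features h.
    return (W_gamma @ z) * h + (W_beta @ z)

def hamiltonian(q, p, z, params):
    # Toy separable Hamiltonian H(q, p; z) = p^2/2 + V(q; z), with the
    # potential network conditioned on the latent system code z.
    W1, b1, W_gamma, W_beta, w2 = params
    h = np.tanh(W1 @ np.atleast_1d(q) + b1)
    h = film(h, z, W_gamma, W_beta)
    return 0.5 * p ** 2 + float(w2 @ h)

def grad_H(q, p, z, params, eps=1e-5):
    # Central finite differences stand in for autodiff here.
    dq = (hamiltonian(q + eps, p, z, params)
          - hamiltonian(q - eps, p, z, params)) / (2 * eps)
    dp = (hamiltonian(q, p + eps, z, params)
          - hamiltonian(q, p - eps, z, params)) / (2 * eps)
    return dq, dp

def leapfrog(q, p, z, params, dt=0.01, steps=500):
    # Symplectic leapfrog (kick-drift-kick): near-conserves H for any z,
    # so the structure-preserving bias survives latent modulation.
    for _ in range(steps):
        dq, _ = grad_H(q, p, z, params)
        p -= 0.5 * dt * dq
        _, dp = grad_H(q, p, z, params)
        q += dt * dp
        dq, _ = grad_H(q, p, z, params)
        p -= 0.5 * dt * dq
    return q, p

rng = np.random.default_rng(0)
d, dz = 8, 3  # hidden width and latent dimension (arbitrary choices)
params = (0.5 * rng.normal(size=(d, 1)), 0.1 * rng.normal(size=d),
          0.5 * rng.normal(size=(d, dz)), 0.1 * rng.normal(size=(d, dz)),
          0.5 * rng.normal(size=d))
z = rng.normal(size=dz)  # stands in for a code inferred from few-shot context
q0, p0 = 0.5, 0.0
qT, pT = leapfrog(q0, p0, z, params)
drift = abs(hamiltonian(qT, pT, z, params) - hamiltonian(q0, p0, z, params))
```

Because the integrator is symplectic, the energy drift stays small over the whole rollout even though the dynamics themselves depend on the randomly drawn latent code; swapping in a different z changes the trajectory but not the conservation property.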
Related papers
- SALAAD: Sparse And Low-Rank Adaptation via ADMM for Large Language Model Inference [38.037874715181964]
We propose SALAAD, a plug-and-play framework that induces sparse and low-rank structures during training. Experiments across model scales show that SALAAD substantially reduces memory consumption during deployment. A single training run yields a continuous spectrum of model capacities, enabling smooth and elastic deployment across diverse memory budgets.
arXiv Detail & Related papers (2026-02-01T00:00:11Z)
- Rethinking the Role of Dynamic Sparse Training for Scalable Deep Reinforcement Learning [58.533203990515034]
Scaling neural networks has driven breakthrough advances in machine learning, yet this paradigm fails in deep reinforcement learning (DRL). We show that dynamic sparse training strategies provide module-specific benefits that complement the primary scalability foundation established by architectural improvements. We finally distill these insights into Module-Specific Training (MST), a practical framework that exploits the benefits of architectural improvements and demonstrates substantial scalability gains across diverse RL algorithms without algorithmic modifications.
arXiv Detail & Related papers (2025-10-14T03:03:08Z)
- From Physics to Machine Learning and Back: Part II - Learning and Observational Bias in PHM [52.64097278841485]
This review examines how incorporating learning and observational biases through physics-informed modeling and data strategies can guide models toward physically consistent and reliable predictions. Fast adaptation methods, including meta-learning and few-shot learning, are reviewed alongside domain generalization techniques.
arXiv Detail & Related papers (2025-09-25T14:15:43Z)
- Designing Robust Software Sensors for Nonlinear Systems via Neural Networks and Adaptive Sliding Mode Control [2.884893167166808]
This paper presents a novel approach to designing software sensors for nonlinear dynamical systems. Unlike traditional model-based observers that rely on explicit transformations or linearization, the proposed framework integrates neural networks with adaptive Sliding Mode Control (SMC). The training methodology leverages the system's governing equations as a physics-based constraint, enabling observer synthesis without access to ground-truth state trajectories.
arXiv Detail & Related papers (2025-07-09T13:06:58Z)
- Beyond Static Models: Hypernetworks for Adaptive and Generalizable Forecasting in Complex Parametric Dynamical Systems [0.0]
We introduce the Parametric Hypernetwork for Learning Interpolated Networks (PHLieNet). PHLieNet simultaneously learns a global mapping from the parameter space to a nonlinear embedding and a mapping from the inferred embedding to the weights of a dynamics propagation network. By interpolating in the space of models rather than observations, PHLieNet facilitates smooth transitions across parameterized system behaviors.
arXiv Detail & Related papers (2025-06-24T13:22:49Z)
- In-Context Learning for Gradient-Free Receiver Adaptation: Principles, Applications, and Theory [54.92893355284945]
Deep learning-based wireless receivers offer the potential to dynamically adapt to varying channel environments. Current adaptation strategies, including joint training, hypernetwork-based methods, and meta-learning, either demonstrate limited flexibility or necessitate explicit optimization through gradient descent. This paper presents gradient-free adaptation techniques rooted in the emerging paradigm of in-context learning (ICL).
arXiv Detail & Related papers (2025-06-18T06:43:55Z)
- Dynamic Manipulation of Deformable Objects in 3D: Simulation, Benchmark and Learning Strategy [88.8665000676562]
Prior methods often simplify the problem to low-speed or 2D settings, limiting their applicability to real-world 3D tasks. To mitigate data scarcity, we introduce a novel simulation framework and benchmark grounded in reduced-order dynamics. We propose Dynamics Informed Diffusion Policy (DIDP), a framework that integrates imitation pretraining with physics-informed test-time adaptation.
arXiv Detail & Related papers (2025-05-23T03:28:25Z)
- Manifold meta-learning for reduced-complexity neural system identification [1.0276024900942875]
We propose a meta-learning framework that discovers a low-dimensional manifold. This manifold is learned from a meta-dataset of input-output sequences generated by a class of related dynamical systems. Unlike bilevel meta-learning approaches, our method employs an auxiliary neural network to map datasets directly onto the learned manifold.
arXiv Detail & Related papers (2025-04-16T06:49:56Z)
- CalFuse: Feature Calibration Enhanced Parameter Fusion for Class-Continual Learning [12.022673345835688]
Class-Continual Learning (CCL) enables models to continuously learn new class knowledge while retaining previous classes. Traditional CCL methods rely on visual features, which limits their effectiveness in complex, multimodal scenarios. We propose CalFuse, a feature-calibration-enhanced parameter fusion framework for dynamic knowledge fusion.
arXiv Detail & Related papers (2025-03-24T13:44:12Z)
- Meta-Learning for Physically-Constrained Neural System Identification [9.417562391585076]
We present a gradient-based meta-learning framework for rapid adaptation of neural state-space models (NSSMs) for black-box system identification. We show that the meta-learned models result in improved downstream performance in model-based state estimation in indoor localization and energy systems.
arXiv Detail & Related papers (2025-01-10T18:46:28Z)
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [115.79349923044663]
Few-shot class-incremental learning (FSCIL) aims to incrementally learn novel classes from limited examples. Existing methods face a critical dilemma: static architectures rely on a fixed parameter space to learn from data that arrive sequentially, and are prone to overfitting to the current session. In this study, we explore the potential of Selective State Space Models (SSMs) for FSCIL.
arXiv Detail & Related papers (2024-07-08T17:09:39Z)
- Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z)
- Structure-Preserving Learning Using Gaussian Processes and Variational Integrators [62.31425348954686]
We propose the combination of a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.