Multi-fidelity Reinforcement Learning Control for Complex Dynamical Systems
- URL: http://arxiv.org/abs/2504.05588v1
- Date: Tue, 08 Apr 2025 00:50:15 GMT
- Title: Multi-fidelity Reinforcement Learning Control for Complex Dynamical Systems
- Authors: Luning Sun, Xin-Yang Liu, Siyan Zhao, Aditya Grover, Jian-Xun Wang, Jayaraman J. Thiagarajan
- Abstract summary: We propose a multi-fidelity reinforcement learning framework for controlling instabilities in complex systems. The effectiveness of the proposed framework is demonstrated on two complex dynamical systems in physics.
- Score: 42.2790464348673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controlling instabilities in complex dynamical systems is challenging in scientific and engineering applications. Deep reinforcement learning (DRL) has shown promising results across a range of scientific applications. However, the many-query nature of control tasks requires repeated interaction with the real environment governed by the underlying physics, and such data is usually sparse when collected from experiments and expensive to generate by simulation for complex dynamics. Alternatively, controlling a learned surrogate model can mitigate the computational cost, but a fast, offline-trained surrogate struggles to reproduce accurate pointwise dynamics when the system is chaotic. To bridge this gap, the current work proposes a multi-fidelity reinforcement learning (MFRL) framework that leverages differentiable hybrid models for control tasks, where a physics-based hybrid model is corrected with limited high-fidelity data. We also propose a spectrum-based reward function for RL training. The effectiveness of the proposed framework is demonstrated on two complex dynamical systems in physics. The statistics of the MFRL control results match those computed from many-query evaluations of the high-fidelity environment and outperform other state-of-the-art (SOTA) baselines.
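The abstract does not spell out the exact form of the spectrum-based reward, so the following is only a minimal sketch of one plausible choice: reward the controller for matching the power spectral density of a reference trajectory. The function name, the log-spectral distance, and the one-sided FFT estimator are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def spectrum_reward(u_traj, u_ref):
    """Hypothetical spectrum-based reward (higher is better).

    u_traj, u_ref: equal-length 1-D arrays of a scalar observable over time.
    Compares power spectral densities rather than pointwise trajectories,
    which is the natural statistic when the dynamics are chaotic.
    """
    # One-sided power spectral density via the real FFT of the demeaned signal
    psd = lambda x: np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)
    eps = 1e-12  # avoid log(0) for empty frequency bins
    err = np.log(psd(u_traj) + eps) - np.log(psd(u_ref) + eps)
    # Negative RMS log-spectral distance: zero when the spectra coincide
    return -np.sqrt(np.mean(err ** 2))
```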
Related papers
- Model-based controller assisted domain randomization in deep reinforcement learning: application to nonlinear powertrain control [0.0]
This study proposes a new robust control approach using the framework of deep reinforcement learning (DRL).
The problem setup is modeled via the latent Markov decision process (LMDP), a set of vanilla MDPs, for a controlled system subject to uncertainties and nonlinearities.
Compared to traditional DRL-based controllers, the proposed design achieves a higher level of generalization. (A minimal domain-sampling sketch follows this entry.)
arXiv Detail & Related papers (2025-04-28T12:09:07Z)
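As a loose illustration of the LMDP setup, here is a hypothetical domain-randomization sketch in which each training episode draws one "vanilla MDP" from a family of perturbed plants. The parameter names and ranges are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mdp_params():
    # Each episode instantiates a different member of the LMDP's MDP family;
    # all names and ranges below are hypothetical.
    return {
        "mass":     rng.uniform(0.8, 1.2),    # +/-20% inertia uncertainty
        "friction": rng.uniform(0.5, 1.5),    # unmodeled nonlinear losses
        "delay":    int(rng.integers(0, 3)),  # actuator delay, in steps
    }

# Training-loop outline: the policy must perform well across the whole family.
for episode in range(3):
    print(episode, sample_mdp_params())
```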
- Stabilizing Reinforcement Learning in Differentiable Multiphysics Simulation [11.360832156847103]
This paper presents a novel RL algorithm and a simulation platform to enable scaling RL on tasks involving rigid bodies and deformables.
We introduce Soft Analytic Policy Optimization (SAPO), a maximum-entropy first-order model-based RL algorithm, which uses first-order analytic gradients to train an actor to maximize expected return and entropy.
We also develop Rewarped, a parallel differentiable multiphysics simulation platform that supports simulating various materials beyond rigid bodies. (A toy first-order-gradient sketch follows this entry.)
arXiv Detail & Related papers (2024-12-16T18:56:24Z)
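SAPO's full actor-critic machinery is beyond a digest note, but its core ingredient, differentiating the return through a differentiable simulator, can be shown on a toy system. Everything below (the scalar linear plant, quadratic cost, and linear policy) is a hand-rolled assumption used only to make the backward pass explicit; it omits SAPO's entropy term and stochastic actor.

```python
import numpy as np

def rollout_and_grad(k, a=0.9, b=0.5, r=0.1, x0=1.0, T=50):
    """Toy 'differentiable simulator': x_{t+1} = a*x_t + b*u_t with linear
    policy u_t = -k*x_t. Returns cost J and its analytic gradient dJ/dk,
    obtained by backpropagation through the rollout (a reverse/adjoint pass)."""
    xs = [x0]
    for _ in range(T):                       # forward rollout
        xs.append((a - b * k) * xs[-1])
    J = sum(x * x + r * (k * x) ** 2 for x in xs[:-1])
    dJdk, dJdx = 0.0, 0.0                    # reverse pass, latest step first
    for x in reversed(xs[:-1]):
        dJdk += dJdx * (-b * x) + 2.0 * r * k * x * x   # chain rule through x_{t+1}
        dJdx = dJdx * (a - b * k) + 2.0 * x * (1.0 + r * k * k)
    return J, dJdk

k = 0.0
for _ in range(200):                         # plain first-order descent on the gain
    J, g = rollout_and_grad(k)
    k -= 0.05 * g
print(f"learned gain k = {k:.3f}, cost J = {J:.3f}")
```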
- Learning System Dynamics without Forgetting [60.08612207170659]
We investigate the problem of Continual Dynamics Learning (CDL), examining task configurations and evaluating the applicability of existing techniques.
We propose the Mode-switching Graph ODE (MS-GODE) model, which integrates the strengths of LG-ODE and sub-network learning with a mode-switching module.
We construct a novel benchmark of biological dynamic systems for CDL, Bio-CDL, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning [5.59265003686955]
We introduce SINDy-RL, a framework for combining SINDy and deep reinforcement learning.
SINDy-RL achieves comparable performance to state-of-the-art DRL algorithms.
We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems. (A sketch of the SINDy regression step follows this entry.)
arXiv Detail & Related papers (2024-03-14T05:17:39Z)
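SINDy itself is the well-documented piece here: it regresses time derivatives onto a sparse combination of candidate library terms. Below is a minimal sequential-thresholded-least-squares sketch with a quadratic polynomial library; the library choice and threshold are illustrative defaults, and SINDy-RL layers ensembling and RL-specific machinery on top of this step.

```python
import numpy as np

def sindy_fit(X, Xdot, threshold=0.1, n_iters=10):
    """Sequential thresholded least squares, the core regression of SINDy.

    X:    (T, n) state snapshots; Xdot: (T, n) time derivatives.
    Finds a sparse coefficient matrix Xi with Xdot ~= Theta(X) @ Xi.
    """
    T, n = X.shape
    # Candidate library Theta(X): [1, x_i, x_i * x_j] -- a minimal quadratic choice
    cols = [np.ones(T)] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    Theta = np.stack(cols, axis=1)
    Xi, *_ = np.linalg.lstsq(Theta, Xdot, rcond=None)   # dense initial fit
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold                  # prune small coefficients
        Xi[small] = 0.0
        for k in range(n):                              # refit each state equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], Xdot[:, k], rcond=None)
    return Xi
```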
- RL + Model-based Control: Using On-demand Optimal Control to Learn Versatile Legged Locomotion [16.800984476447624]
This paper presents a control framework that combines model-based optimal control and reinforcement learning.
We validate the robustness and controllability of the framework through a series of experiments.
Our framework effortlessly supports the training of control policies for robots with diverse dimensions.
arXiv Detail & Related papers (2023-05-29T01:33:55Z)
- Learning a model is paramount for sample efficiency in reinforcement learning control of PDEs [5.488334211013093]
We show that learning an actuated model in parallel to training the RL agent significantly reduces the total amount of required data sampled from the real system.
We also show that iteratively updating the model is of major importance to avoid biases in the RL training. (A toy Dyna-style loop illustrating this pattern follows the entry.)
arXiv Detail & Related papers (2023-02-14T16:14:39Z)
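The paper's setting is PDE control, but the learn-a-model-while-training pattern it advocates is easiest to see in a tabular toy. The Dyna-Q loop below, on an invented 5-state chain, interleaves one real transition (expensive) with many model-based updates (cheap), refitting the model from every real sample; none of it is the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_step(s, a):
    """Tiny chain MDP: states 0..4, actions {0: left, 1: right}; reward 1 at state 4."""
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == 4)

Q = np.zeros((5, 2))
model = {}                                   # learned tabular model: (s, a) -> (s', r)
eps, alpha, gamma = 0.1, 0.5, 0.95
s = 0
for _ in range(300):
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = real_step(s, a)                  # one expensive real interaction...
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    model[(s, a)] = (s2, r)                  # ...which also updates the model,
    for _ in range(20):                      # ...followed by many cheap model updates
        (ms, ma), (ms2, mr) = list(model.items())[int(rng.integers(len(model)))]
        Q[ms, ma] += alpha * (mr + gamma * Q[ms2].max() - Q[ms, ma])
    s = 0 if s2 == 4 else s2
print(np.round(Q, 2))                        # 'right' dominates in every state
```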
- Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions. (An ensemble-disagreement sketch follows this entry.)
arXiv Detail & Related papers (2022-10-23T00:45:05Z)
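The entry does not say how uncertainty is quantified, so the snippet below shows one common heuristic only: score a candidate state-action pair by the disagreement of an ensemble of learned dynamics models and collect data where the score is high. The function and its signature are assumptions for illustration; the paper's actual acquisition rule may differ.

```python
import numpy as np

def disagreement(models, x, u):
    """Epistemic-uncertainty proxy: std. dev. of next-state predictions
    across an ensemble of learned dynamics models (a common heuristic)."""
    preds = np.stack([m(x, u) for m in models])  # (K, state_dim)
    return preds.std(axis=0).mean()

# Toy usage: three 'models' that disagree mildly about a scalar linear plant.
models = [lambda x, u, a=a: a * x + 0.5 * u for a in (0.88, 0.90, 0.93)]
x, u = np.array([1.0]), np.array([0.2])
print(disagreement(models, x, u))   # high values flag inputs worth exploring
```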
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions. (A minimal causal-convolution sketch follows this entry.)
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
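A temporal convolutional network is built from causal (optionally dilated) 1-D convolutions, so the output at time t never sees future inputs. The helper below is a bare-bones numpy version of that building block; PI-TCN's actual architecture stacks many such layers with nonlinearities and dense feed-forward connections, which this sketch does not reproduce.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """One causal dilated 1-D convolution, the building block of a TCN.

    x: (T,) input signal; w: (K,) filter. Output y[t] = sum_k w[k] * x[t - k*d],
    so w[k] weights the sample k*dilation steps in the past (no future leakage).
    """
    T, K = len(x), len(w)
    pad = np.concatenate([np.zeros((K - 1) * dilation), x])  # left-pad with zeros
    return np.array([sum(w[k] * pad[t + (K - 1 - k) * dilation] for k in range(K))
                     for t in range(T)])

x = np.sin(np.linspace(0, 6, 32))
w = np.array([0.5, 0.3, 0.2])
print(causal_dilated_conv(x, w, dilation=2)[:5])
```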
- Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z)
- Learning physics-informed simulation models for soft robotic manipulation: A case study with dielectric elastomer actuators [21.349079159359746]
Soft actuators offer a safe and adaptable approach to robotic tasks like gentle grasping and dexterous movement.
Creating accurate models to control such systems is challenging due to the complex physics of deformable materials.
This paper presents a framework that combines the advantages of a differentiable simulator and the Finite Element Method.
arXiv Detail & Related papers (2022-02-25T21:15:05Z)
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
- Data-Efficient Learning for Complex and Real-Time Physical Problem Solving using Augmented Simulation [49.631034790080406]
We present a task for navigating a marble to the center of a circular maze.
We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system.
arXiv Detail & Related papers (2020-11-14T02:03:08Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models. (An MPPI-style update sketch follows this entry.)
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
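The information-theoretic MPC family this entry refers to is usually instantiated as MPPI, whose control update reweights sampled perturbations by the exponentiated negative trajectory cost. The sketch below shows only that standard update, not the paper's Q-learning extension; the shapes and temperature default are illustrative.

```python
import numpy as np

def mppi_update(costs, noise, u_nominal, lam=1.0):
    """Information-theoretic (MPPI-style) control update.

    costs:     (K,)       trajectory costs of K sampled perturbations
    noise:     (K, H, m)  perturbations around the nominal control plan
    u_nominal: (H, m)     current control sequence over horizon H
    """
    w = np.exp(-(costs - costs.min()) / lam)   # softmin weights (shifted for stability)
    w /= w.sum()
    return u_nominal + np.einsum('k,khm->hm', w, noise)

# Toy usage with random rollout statistics.
rng = np.random.default_rng(0)
K, H, m = 64, 10, 2
u = mppi_update(rng.random(K), rng.normal(size=(K, H, m)), np.zeros((H, m)))
print(u.shape)  # (10, 2)
```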