Controlling dynamics of stochastic systems with deep reinforcement learning
- URL: http://arxiv.org/abs/2502.18111v1
- Date: Tue, 25 Feb 2025 11:28:12 GMT
- Title: Controlling dynamics of stochastic systems with deep reinforcement learning
- Authors: Ruslan Mukhamadiarov
- Abstract summary: We propose a simulation algorithm that achieves control of the dynamics of stochastic systems through the use of trained artificial neural networks. Specifically, we use agent-based simulations where the neural network plays the role of the controller that drives local state-to-state transitions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A properly designed controller can help improve the quality of experimental measurements or force a dynamical system to follow a completely new time-evolution path. Recent developments in deep reinforcement learning have made rapid advances toward designing effective control schemes for fairly complex systems. However, a general simulation scheme that employs deep reinforcement learning for exerting control in stochastic systems is yet to be established. In this paper, we attempt to further bridge the gap between control theory and deep reinforcement learning by proposing a simulation algorithm that achieves control of the dynamics of stochastic systems through the use of trained artificial neural networks. Specifically, we use agent-based simulations in which the neural network plays the role of the controller that drives local state-to-state transitions. We demonstrate the workflow and the effectiveness of the proposed control methods on two stochastic processes: particle coalescence on a lattice and a totally asymmetric exclusion process.
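To make the setup concrete, here is a minimal sketch of the kind of agent-based simulation the abstract describes: a totally asymmetric exclusion process on a ring in which a small neural network, standing in for the trained controller, maps a local occupancy window to a hop probability. The two-layer network, the window size, and all parameters are illustrative assumptions, not the authors' implementation; in practice the weights would be trained with deep RL rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer network: maps a local occupancy window
# to a hop probability (weights would be trained by RL, not random).
W1 = rng.normal(scale=0.5, size=(8, 5))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8)
b2 = 0.0

def hop_probability(window):
    """Controller: local window of 5 sites -> probability of hopping right."""
    h = np.tanh(W1 @ window + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

def tasep_step(lattice):
    """One random-sequential TASEP sweep with NN-controlled hop rates."""
    L = len(lattice)
    for _ in range(L):
        i = rng.integers(L)
        j = (i + 1) % L                      # periodic boundary
        if lattice[i] == 1 and lattice[j] == 0:
            window = lattice[np.arange(i - 2, i + 3) % L].astype(float)
            if rng.random() < hop_probability(window):
                lattice[i], lattice[j] = 0, 1
    return lattice

lattice = (rng.random(100) < 0.3).astype(int)   # 30% initial density
for _ in range(1000):
    lattice = tasep_step(lattice)
print("final density:", lattice.mean())
```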
Related papers
- Model-based controller assisted domain randomization in deep reinforcement learning: application to nonlinear powertrain control [0.0]
This study proposes a new robust control approach within the framework of deep reinforcement learning (DRL).
The problem setup is modeled as a latent Markov decision process (LMDP), a set of vanilla MDPs, for a controlled system subject to uncertainties and nonlinearities.
Compared to traditional DRL-based controls, the proposed controller design achieves a substantially higher level of generalization.
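As a rough illustration of the LMDP idea, the sketch below samples a fresh "vanilla" MDP from a latent parameter distribution at the start of each episode, so the policy is forced to generalize across the whole family. The toy plant, parameter ranges, and placeholder policy are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mdp_params():
    """Draw one 'vanilla' MDP from the latent distribution: here,
    randomized inertia and friction for a hypothetical powertrain model."""
    return {"inertia": rng.uniform(0.8, 1.2),
            "friction": rng.uniform(0.05, 0.2)}

def rollout(policy, params, horizon=200):
    """Simulate one episode of a toy first-order plant under the sampled params."""
    x, total_reward = 0.0, 0.0
    for _ in range(horizon):
        u = policy(x)
        x += (u - params["friction"] * x) / params["inertia"]
        total_reward -= (x - 1.0) ** 2       # track the setpoint x = 1
    return total_reward

# Domain-randomized training loop (the policy update itself is omitted):
# every episode sees a different MDP, so the learned policy must
# generalize across the latent family rather than overfit one model.
policy = lambda x: 0.5 * (1.0 - x)           # placeholder proportional policy
for episode in range(5):
    params = sample_mdp_params()
    print(episode, params, round(rollout(policy, params), 2))
```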
arXiv Detail & Related papers (2025-04-28T12:09:07Z) - Dropout MPC: An Ensemble Neural MPC Approach for Systems with Learned Dynamics [0.0]
We propose a novel sampling-based ensemble neural MPC algorithm that employs the Monte-Carlo dropout technique on the learned system model.
The method targets, in general, uncertain systems with complex dynamics, where models derived from first principles are hard to infer.
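A minimal sketch of the Monte-Carlo dropout idea on a learned dynamics model, assuming a generic PyTorch network rather than the authors' architecture: dropout is kept active at prediction time, and repeated stochastic forward passes act as an implicit ensemble whose spread quantifies model uncertainty for the MPC planner.

```python
import torch
import torch.nn as nn

# Hypothetical learned one-step dynamics model with dropout layers.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 3),                      # predicts next state (3-dim)
)

def mc_dropout_predict(state_action, n_samples=20):
    """Monte-Carlo dropout: keep dropout active at inference and treat the
    stochastic forward passes as an implicit ensemble of dynamics models."""
    model.train()                          # .train() keeps Dropout sampling masks
    with torch.no_grad():
        preds = torch.stack([model(state_action) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # mean prediction + uncertainty

sa = torch.randn(1, 4)                     # [state, action] input, illustrative
mean, std = mc_dropout_predict(sa)
# An MPC planner could propagate `mean` and penalize trajectories where
# `std` is large, backing off in regions the learned model is unsure about.
```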
arXiv Detail & Related papers (2024-06-04T17:15:25Z) - Distributed Robust Learning based Formation Control of Mobile Robots based on Bioinspired Neural Dynamics [14.149584412213269]
We first introduce a distributed estimator using a variable structure and cascaded design technique, eliminating the need for derivative information to improve real-time performance.
Then, a kinematic tracking control method is developed utilizing a bioinspired neural dynamic-based approach aimed at providing smooth control inputs and effectively resolving the speed jump issue.
To address the challenges for robots operating with completely unknown dynamics and disturbances, a learning-based robust dynamic controller is developed.
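The "bioinspired neural dynamics" in such controllers is commonly a shunting model of Grossberg type; assuming that form (the snippet above does not spell it out), the sketch below shows how it turns a step change in tracking error into a smooth, bounded command, which is how the speed-jump issue is typically resolved.

```python
import numpy as np

def shunting_step(x, e, dt=0.01, A=10.0, B=1.0, D=1.0):
    """One Euler step of shunting neural dynamics
        dx/dt = -A*x + (B - x)*max(e, 0) - (D + x)*max(-e, 0).
    The state x stays bounded in [-D, B] and responds smoothly to jumps in e."""
    e_pos, e_neg = max(e, 0.0), max(-e, 0.0)
    return x + dt * (-A * x + (B - x) * e_pos - (D + x) * e_neg)

# A step change in the tracking error e yields a smooth, bounded command
# instead of the instantaneous speed jump a raw proportional law would give.
x, xs = 0.0, []
for t in range(300):
    e = 5.0 if t > 50 else 0.0         # sudden error jump at t = 50
    x = shunting_step(x, e)
    xs.append(x)
print("command stays within bounds:", min(xs) >= -1.0 and max(xs) <= 1.0)
```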
arXiv Detail & Related papers (2024-03-23T04:36:12Z) - Decentralized Event-Triggered Online Learning for Safe Consensus of Multi-Agent Systems with Gaussian Process Regression [3.405252606286664]
This paper presents a novel learning-based distributed control law, augmented by auxiliary dynamics.
For continuous enhancement in predictive performance, a data-efficient online learning strategy with a decentralized event-triggered mechanism is proposed.
To demonstrate the efficacy of the proposed learning-based controller, a comparative analysis is conducted, contrasting it with both conventional distributed control laws and offline learning methodologies.
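A minimal sketch of an event-triggered online learning loop with Gaussian process regression, using scikit-learn for the GP; the trigger threshold, the toy target function, and the scalar setting are illustrative assumptions rather than the paper's multi-agent construction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
f = lambda x: np.sin(3 * x)                # unknown dynamics term to learn online
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)

X_train, y_train = [], []
threshold = 0.15                           # event-trigger threshold (illustrative)

for x in np.linspace(0, 2 * np.pi, 200):
    y = f(x) + 0.01 * rng.normal()         # noisy online measurement
    pred = gp.predict(np.array([[x]]))[0] if X_train else 0.0
    # Event-triggered update: retrain only when the prediction has become
    # stale, keeping computation and communication low between events.
    if abs(y - pred) > threshold:
        X_train.append([x]); y_train.append(y)
        gp.fit(np.array(X_train), np.array(y_train))

print(f"stored {len(X_train)} of 200 samples")
```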
arXiv Detail & Related papers (2024-02-05T16:41:17Z) - Controlling quantum many-body systems using reduced-order modelling [0.0]
We propose an efficient approach for solving a class of control problems for many-body quantum systems.
Simulating dynamics of such a reduced-order model, viewed as a "digital twin" of the original subsystem, is significantly more efficient.
Our results will find direct applications in the study of many-body systems, in probing non-trivial quasiparticle properties, as well as in developing control tools for quantum computing devices.
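The quantum-specific construction aside, the generic reduced-order-modelling step can be sketched as follows: collect snapshots of a full linear system, extract a low-dimensional basis (POD), and Galerkin-project the dynamics onto it to obtain a cheap "digital twin". Everything here (the random stable system, dimensions, step sizes) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
n, r = 200, 8                               # full and reduced dimensions

A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)  # stable dynamics
x0 = rng.normal(size=n)

# Collect snapshots of the full dynamics x' = A x (Euler, dt = 0.01).
dt, snapshots, x = 0.01, [], x0.copy()
for _ in range(300):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())

# POD basis: leading left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(np.array(snapshots).T, full_matrices=False)
V = U[:, :r]                                 # (n, r) projection basis

# Galerkin-projected reduced model: z' = (V^T A V) z, an r-dim "digital twin".
A_r = V.T @ A @ V
z = V.T @ x0
for _ in range(300):
    z = z + dt * (A_r @ z)

print("reduced-model error:", np.linalg.norm(V @ z - x) / np.linalg.norm(x))
```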
arXiv Detail & Related papers (2022-11-01T13:58:44Z) - Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
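The hard-constraint idea can be illustrated without the convolutional machinery: antisymmetrizing a learned pairwise interaction, f_ij = g(x_i, x_j) - g(x_j, x_i), forces f_ij = -f_ji, so the pairwise contributions cancel in the total and linear momentum is conserved to machine precision. The stand-in function g below is arbitrary, not a trained layer.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
pos = rng.normal(size=(n, 2))
vel = rng.normal(size=(n, 2))

def learned_pairwise(x_i, x_j):
    """Stand-in for a learned continuous-convolution message (arbitrary here)."""
    d = x_j - x_i
    return np.tanh(d) * np.exp(-np.sum(d * d))

# Antisymmetrized forces: f_ij := g(x_i, x_j) - g(x_j, x_i), so f_ij = -f_ji
# by construction and the net force (hence momentum change) is exactly zero.
forces = np.zeros_like(pos)
for i in range(n):
    for j in range(n):
        if i != j:
            forces[i] += (learned_pairwise(pos[i], pos[j])
                          - learned_pairwise(pos[j], pos[i]))

vel_new = vel + 0.01 * forces
print("momentum drift:", np.linalg.norm(vel_new.sum(axis=0) - vel.sum(axis=0)))
```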
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
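A minimal dilated temporal convolution stack of the kind a TCN uses, written in PyTorch; the layer sizes, the dense readout, and the input/output dimensions are illustrative assumptions, not the published PI-TCN architecture.

```python
import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    """Minimal dilated temporal convolution stack (illustrative, not PI-TCN).
    Input: a history of [state, action] vectors; output: predicted acceleration."""
    def __init__(self, in_dim=10, hidden=32, out_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=2, dilation=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=2, dilation=4),
            nn.ReLU(),
        )
        self.head = nn.Linear(hidden, out_dim)   # dense feed-forward readout

    def forward(self, x):               # x: (batch, in_dim, time)
        h = self.net(x)                 # dilations shrink the time axis
        return self.head(h[:, :, -1])   # predict from the last time step

model = TinyTCN()
history = torch.randn(1, 10, 20)        # 20-step history of 10-dim features
print(model(history).shape)             # torch.Size([1, 6])
```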
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
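One coarse way to see why sparsity helps, sketched under strong simplifying assumptions: with sparse A and B, a state variable that is not reachable from any input through the sparsity graph can never be influenced by control. The paper's actual characterization of irrelevant variables is finer than this plain reachability check.

```python
import numpy as np
from collections import deque

# Sparse system x' = A x + B u: edges j -> i wherever A[i, j] != 0,
# and input edges into every i with B[i, k] != 0.
A = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0], [1], [0], [0]], dtype=float)

def controllable_states(A, B):
    """States reachable from the inputs through the sparsity graph
    (generic reachability, not the paper's finer optimal-control criterion)."""
    n = A.shape[0]
    frontier = deque(i for i in range(n) if B[i].any())
    seen = set(frontier)
    while frontier:
        j = frontier.popleft()
        for i in range(n):
            if A[i, j] != 0 and i not in seen:   # state j feeds into state i
                seen.add(i); frontier.append(i)
    return sorted(seen)

print(controllable_states(A, B))   # [0, 1]: states 2 and 3 never see the input
```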
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Model-Free Control of Dynamical Systems with Deep Reservoir Computing [0.0]
We propose and demonstrate a nonlinear control method that can be applied to unknown, complex systems.
Our technique requires no prior knowledge of the system and is thus model-free.
Reservoir computers are well-suited to the control problem because they require small training data sets and remarkably low training times.
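A minimal echo state network sketch: the reservoir weights stay fixed and only a linear readout is fitted by ridge regression, which is why the training data and training time stay small. The task (one-step prediction of a sine wave), sizes, and scalings are illustrative assumptions; a control application would wire the readout into the feedback loop.

```python
import numpy as np

rng = np.random.default_rng(4)
n_res, n_in = 200, 1

# Fixed random reservoir, scaled to spectral radius < 1 (echo state property).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; collect its states."""
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Training is just a linear least-squares (ridge) fit of the readout --
# which is why the training data and time requirements are so small.
u = np.sin(np.linspace(0, 20, 500))
target = np.roll(u, -1)                       # toy task: one-step prediction
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ target)
print("train error:", np.mean((X @ W_out - target) ** 2))
```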
arXiv Detail & Related papers (2020-10-05T18:59:51Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
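One standard way to keep a parameter matrix evolving on O(d), assuming the common Cayley-transform discretization rather than the paper's exact scheme: integrate the flow Q' = S Q with a skew-symmetric generator S, which preserves orthogonality exactly at every step.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 6

Q = np.linalg.qr(rng.normal(size=(d, d)))[0]   # start on O(d)

def cayley_step(Q, S, eta=0.1):
    """Move Q along the flow Q' = S Q with skew-symmetric S via the Cayley
    transform; the update matrix is orthogonal, so Q stays on O(d) exactly."""
    I = np.eye(len(Q))
    M = np.linalg.solve(I - 0.5 * eta * S, I + 0.5 * eta * S)
    return M @ Q

for _ in range(100):
    G = rng.normal(size=(d, d))        # stand-in for a gradient/generator
    S = 0.5 * (G - G.T)                # skew-symmetrize: S = -S^T
    Q = cayley_step(Q, S)

print("orthogonality defect:", np.linalg.norm(Q.T @ Q - np.eye(d)))
```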
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - Online Reinforcement Learning Control by Direct Heuristic Dynamic
Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
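The time-driven versus event-driven distinction can be sketched in a few lines: instead of updating at every sample, the event-driven rule fires only when the state has moved by more than a threshold since the last event, so insignificant fluctuations such as noise trigger no learning. The scalar system, gain, and threshold below are illustrative assumptions, not the dHDP update itself.

```python
import numpy as np

rng = np.random.default_rng(6)

w, last_event_state = 0.0, 0.0
updates = 0
threshold = 0.2                         # event-trigger threshold (illustrative)

state = 0.0
for t in range(1000):
    state = 0.99 * state + 0.05 * rng.normal()     # noisy slow drift
    # Event-driven rule: skip the weight update when the state change since
    # the last event is insignificant (e.g. pure noise), unlike time-driven
    # learning, which would adapt w at every single step.
    if abs(state - last_event_state) > threshold:
        w += 0.1 * (state - w)          # stand-in for the dHDP weight update
        last_event_state = state
        updates += 1

print(f"{updates} updates in 1000 steps instead of 1000")
```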
arXiv Detail & Related papers (2020-06-16T05:51:25Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
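A minimal sketch of the information-theoretic MPC update (MPPI-style), assuming toy integrator dynamics and a quadratic cost: sampled action perturbations are weighted by exp(-cost/lambda) and averaged, the same softmin structure that shows up in entropy-regularized RL. This illustrates the connection, not the paper's Q-learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def cost(x):
    return (x - 1.0) ** 2               # toy quadratic state cost

def mppi_action(x0, n_samples=256, horizon=10, lam=0.1, sigma=0.3):
    """Information-theoretic MPC update: sample perturbed action sequences,
    weight them by exp(-cost / lambda), and average. The same softmin
    appears as the entropy-regularized (soft) Bellman backup in RL."""
    noise = sigma * rng.normal(size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0
        for t in range(horizon):
            x = x + 0.1 * noise[k, t]   # toy integrator dynamics
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return w @ noise[:, 0]              # weighted first action

x = 0.0
for _ in range(50):
    x += 0.1 * mppi_action(x)
print("state after MPPI control:", round(x, 3))
```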
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.