Symmetry reduction for deep reinforcement learning active control of
chaotic spatiotemporal dynamics
- URL: http://arxiv.org/abs/2104.05437v1
- Date: Fri, 9 Apr 2021 17:55:12 GMT
- Title: Symmetry reduction for deep reinforcement learning active control of
chaotic spatiotemporal dynamics
- Authors: Kevin Zeng, Michael D. Graham
- Abstract summary: Deep reinforcement learning (RL) is capable of discovering complex control strategies for macroscopic objectives in high-dimensional systems.
We show that by moving the deep RL problem to a symmetry-reduced space, we can alleviate limitations inherent in the naive application of deep RL.
We demonstrate that symmetry-reduced deep RL yields improved data efficiency as well as improved control policy efficacy compared to policies found by naive deep RL.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (RL) is a data-driven, model-free method capable
of discovering complex control strategies for macroscopic objectives in
high-dimensional systems, making its application towards flow control
promising. Many systems of flow control interest possess symmetries that, when
neglected, can significantly inhibit the learning and performance of a naive
deep RL approach. Using a test-bed consisting of the Kuramoto-Sivashinsky
Equation (KSE), equally spaced actuators, and a goal of minimizing dissipation
and power cost, we demonstrate that by moving the deep RL problem to a
symmetry-reduced space, we can alleviate limitations inherent in the naive
application of deep RL. We demonstrate that symmetry-reduced deep RL yields
improved data efficiency as well as improved control policy efficacy compared
to policies found by naive deep RL. Interestingly, the policy learned by the
symmetry-aware control agent drives the system toward an equilibrium state
of the forced KSE that is connected by continuation to an equilibrium of the
unforced KSE, despite having been given no explicit information regarding its
existence. That is, to achieve its goal, the RL algorithm discovers and stabilizes
an equilibrium state of the system. Finally, we demonstrate that the
symmetry-reduced control policy is robust to observation and actuation signal
noise, as well as to system parameters it has not observed before.
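The abstract does not spell out the reduction step itself, but on a periodic domain the unforced KSE, u_t = -u u_x - u_xx - u_xxxx, is equivariant under spatial translations and reflection, and equally spaced actuators preserve a discrete subgroup of those translations. Below is a minimal sketch of one common way to factor out the translation symmetry: phase-align the first Fourier mode of the observation before it reaches the agent, then shift the chosen actuation back to the lab frame. The function names, the phase-fixing convention, and the environment interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reduce_translation(u):
    """Shift a periodic field u(x) on a uniform grid into a canonical frame.

    Illustrative convention: roll the field so the phase of its first
    Fourier mode is zero, which removes the translation symmetry.
    Returns the shifted field and the grid shift that was applied.
    """
    N = u.size
    phase = np.angle(np.fft.rfft(u)[1])              # phase of the k = 1 mode
    shift = int(np.round(phase * N / (2 * np.pi))) % N
    return np.roll(u, shift), shift

def restore_action(a_reduced, shift, N):
    """Map an action chosen in the reduced frame back to the lab frame.

    Assumes the actuators sit on the same periodic domain, so the inverse
    symmetry operation is the opposite circular shift, rounded to the
    actuator spacing.
    """
    n_act = a_reduced.size
    act_shift = int(np.round(shift * n_act / N)) % n_act
    return np.roll(a_reduced, -act_shift)

# Hypothetical usage with a gym-style KSE environment and a trained policy:
# obs = env.reset()
# obs_red, s = reduce_translation(obs)
# a_red = policy(obs_red)                  # the agent only ever sees reduced states
# obs, reward, done, info = env.step(restore_action(a_red, s, obs.size))
```

With this wrapping, states that differ only by a translation map to the same reduced observation, so the agent does not have to relearn the same control behavior at every spatial phase.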
Related papers
- SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning [5.59265003686955]
We introduce SINDy-RL, a framework for combining SINDy and deep reinforcement learning.
SINDy-RL achieves comparable performance to state-of-the-art DRL algorithms.
We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems.
arXiv Detail & Related papers (2024-03-14T05:17:39Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Data-driven control of spatiotemporal chaos with reduced-order neural ODE-based models and reinforcement learning [0.0]
Deep learning is capable of discovering complex control strategies for high-dimensional systems, making it promising for flow control applications.
A major challenge associated with RL is that substantial training data must be generated by repeatedly interacting with the target system.
We use a data-driven reduced-order model (ROM) in place of the true system during RL training to efficiently estimate the optimal policy (a minimal sketch of such a surrogate training loop appears after this list).
We show that the ROM-based control strategy translates well to the true KSE and highlight that the RL agent discovers and stabilizes an underlying forced equilibrium solution of the KSE system.
arXiv Detail & Related papers (2022-05-01T23:25:44Z)
- Steady-State Error Compensation in Reference Tracking and Disturbance Rejection Problems for Reinforcement Learning-Based Control [0.9023847175654602]
Reinforcement learning (RL) is a promising, emerging approach in automatic control applications.
Initiative action state augmentation (IASA) for actor-critic-based RL controllers is introduced.
This augmentation does not require any expert knowledge, leaving the approach model free.
arXiv Detail & Related papers (2022-01-31T16:29:19Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning [63.53407136812255]
Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.
Existing Q-learning and actor-critic based off-policy RL algorithms fail when bootstrapping from out-of-distribution (OOD) actions or states.
We propose Uncertainty Weighted Actor-Critic (UWAC), an algorithm that detects OOD state-action pairs and down-weights their contribution in the training objectives accordingly.
arXiv Detail & Related papers (2021-05-17T20:16:46Z)
- Online Algorithms and Policies Using Adaptive and Machine Learning Approaches [0.22020053359163297]
Two classes of nonlinear dynamic systems are considered, both of which are control-affine.
We propose a combination of an adaptive controller in the inner loop and a Reinforcement Learning based policy in the outer loop, suitably chosen to ensure stability and optimality for the nominal dynamics.
In addition to establishing a stability guarantee with real-time control, the AC-RL controller is also shown to lead to parameter learning with persistent excitation.
arXiv Detail & Related papers (2021-05-13T22:51:25Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
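As referenced in the reduced-order-model entry above, a learned surrogate can stand in for the expensive true system while the agent trains, and the policy is deployed on the real dynamics only afterwards. The sketch below shows, under loose assumptions, what such a surrogate training loop can look like; every name here (the ROM stepper, the reward placeholders, the policy-update hook) is an illustrative stand-in rather than the authors' code, and the reward simply mirrors the dissipation-plus-power objective named in the main abstract.

```python
import numpy as np

def dissipation(u, L=22.0):
    """Toy stand-in for the dissipation part of the objective (domain length L is arbitrary here)."""
    dx = L / u.size
    return float(np.sum(np.gradient(u, dx) ** 2) * dx)

def power_cost(a):
    """Toy stand-in for the actuation power penalty."""
    return float(np.sum(a ** 2))

def train_policy_on_surrogate(rom_step, policy, update_policy, u0,
                              episodes=10, horizon=50):
    """Train a control policy against a learned reduced-order model (ROM)
    instead of the true system. All callables here are placeholders."""
    for _ in range(episodes):
        u = u0.copy()
        transitions = []
        for _ in range(horizon):
            a = policy(u)                              # act on the ROM state
            u_next = rom_step(u, a)                    # cheap surrogate step, not a full KSE solve
            r = -dissipation(u_next) - power_cost(a)   # dissipation + power cost objective
            transitions.append((u, a, r, u_next))
            u = u_next
        update_policy(transitions)                     # any standard RL update (e.g. actor-critic)
    return policy

# Dummy stand-ins, just to show the calling convention:
rng = np.random.default_rng(0)
rom = lambda u, a: 0.9 * u + 0.1 * a           # pretend ROM time-stepper
pol = lambda u: -0.1 * u                       # pretend linear-feedback "policy"
upd = lambda transitions: None                 # pretend RL update
train_policy_on_surrogate(rom, pol, upd, rng.standard_normal(64))
```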