A Generative Approach to Control Complex Physical Systems
- URL: http://arxiv.org/abs/2407.06494v1
- Date: Tue, 9 Jul 2024 01:56:23 GMT
- Title: A Generative Approach to Control Complex Physical Systems
- Authors: Long Wei, Peiyan Hu, Ruiqi Feng, Haodong Feng, Yixuan Du, Tao Zhang, Rui Wang, Yue Wang, Zhi-Ming Ma, Tailin Wu
- Abstract summary: We introduce Diffusion Physical systems Control (DiffPhyCon), a new class of methods for addressing the physical systems control problem.
DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives.
We test our method on the 1D Burgers' equation and on 2D jellyfish movement control in a fluid environment.
- Score: 16.733151963652244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controlling the evolution of complex physical systems is a fundamental task across science and engineering. Classical techniques suffer from limited applicability or huge computational costs. On the other hand, recent deep learning and reinforcement learning-based approaches often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of methods to address the physical systems control problem. DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence. Thus, it can explore globally and identify near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that significantly deviate from the training distribution. We test our method on the 1D Burgers' equation and on 2D jellyfish movement control in a fluid environment. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern observed in the jellyfish, aligning with established findings in the field of fluid dynamics.
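The core idea of the abstract — jointly minimizing a learned generative energy and a predefined control objective over a whole control sequence — can be caricatured in a dependency-free sketch. The energy `e_gen` and objective `j_obj` below are simple stand-ins, not the paper's learned diffusion models; the names and toy problem are illustrative assumptions only.

```python
import numpy as np

# Toy stand-ins: a "generative energy" that prefers smooth control
# sequences, and a control objective that asks the terminal control
# value to hit a target. Neither is the paper's actual model.

def e_gen(u):
    # penalize rough control sequences (smoothness prior)
    return np.sum(np.diff(u) ** 2)

def j_obj(u, target):
    # reach a target terminal value
    return (u[-1] - target) ** 2

def grad(f, u, eps=1e-5):
    # finite-difference gradient, to keep the sketch dependency-free
    g = np.zeros_like(u)
    for i in range(len(u)):
        d = np.zeros_like(u)
        d[i] = eps
        g[i] = (f(u + d) - f(u - d)) / (2 * eps)
    return g

def optimize(target, steps=4000, lr=0.05, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(size=8)  # start from noise, loosely echoing diffusion sampling
    for _ in range(steps):
        total = lambda v: e_gen(v) + lam * j_obj(v, target)
        u -= lr * grad(total, u)  # descend the joint energy, not each term alone
    return u

u = optimize(target=1.0)
```

Because both terms are minimized together, the optimizer settles on a sequence that is smooth *and* hits the target (here, all entries near 1.0), rather than optimizing the objective and then projecting back onto plausible sequences.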
Related papers
- Controlling quantum many-body systems using reduced-order modelling [0.0]
We propose an efficient approach for solving a class of control problems for many-body quantum systems.
Simulating dynamics of such a reduced-order model, viewed as a "digital twin" of the original subsystem, is significantly more efficient.
Our results will find direct applications in the study of many-body systems, in probing non-trivial quasiparticle properties, as well as in the development of control tools for quantum computing devices.
arXiv Detail & Related papers (2022-11-01T13:58:44Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
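The hard constraint described above rests on a simple identity: if the pairwise interaction a layer predicts is antisymmetric, f(x_i, x_j) = -f(x_j, x_i), then the summed momentum update over all particles cancels exactly. The kernel below is an arbitrary stand-in for a learned antisymmetric layer, shown only to make the cancellation concrete.

```python
import numpy as np

def antisym_force(xi, xj):
    # any odd function of (xi - xj) is antisymmetric under swapping i and j;
    # tanh is odd and exp(-|.|) is even, so their product is odd
    return np.tanh(xi - xj) * np.exp(-abs(xi - xj))

def momentum_update(positions):
    # accumulate pairwise "forces" on each particle
    n = len(positions)
    dv = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                dv[i] += antisym_force(positions[i], positions[j])
    return dv

pos = np.array([0.0, 0.3, 1.1, 2.0])
dv = momentum_update(pos)
# every pair contributes f_ij + f_ji = 0, so total momentum change is zero
```

The conservation holds by construction, for any particle configuration, which is why it can be imposed as a hard architectural constraint rather than a soft penalty.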
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - Near-optimal control of dynamical systems with neural ordinary differential equations [0.0]
Recent advances in deep learning and neural network-based optimization have contributed to the development of methods that can help solve control problems involving high-dimensional dynamical systems.
We first analyze how truncated and non-truncated backpropagation through time affect runtime performance and the ability of neural networks to learn optimal control functions.
arXiv Detail & Related papers (2022-06-22T14:11:11Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Learning by Doing: Controlling a Dynamical System using Causality,
Control, and Reinforcement Learning [27.564435351371653]
Questions in causality, control, and reinforcement learning go beyond the classical machine learning task of prediction.
We believe that combining the different views might create synergies and this competition is meant as a first step toward such synergies.
The goal in both tracks is to infer controls that drive the system to a desired state.
arXiv Detail & Related papers (2022-02-12T12:37:29Z) - Deep Reinforcement Learning for Online Control of Stochastic Partial
Differential Equations [10.746602033809943]
We formulate the problem of controlling partial differential equations as a reinforcement learning problem.
We present a learning-based, distributed control approach for online control of a system of SPDEs with high dimensional state-action space.
arXiv Detail & Related papers (2021-10-21T16:45:50Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
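One intuition behind "irrelevant" state variables: in an LQR-style problem with sparse dynamics and a cost on a subset of states, a variable can only matter if it influences a cost-relevant state through the sparsity graph of the dynamics matrix. The backward-reachability sketch below illustrates this intuition; it is not the paper's exact characterization, and the helper name and toy system are assumptions.

```python
def relevant_states(A_pattern, cost_states):
    # A_pattern[i] = set of j such that A[i][j] != 0, i.e. x_j influences x_i'
    # walk backwards from cost-relevant states through the sparsity graph
    relevant = set(cost_states)
    frontier = list(cost_states)
    while frontier:
        i = frontier.pop()
        for j in A_pattern.get(i, set()):
            if j not in relevant:
                relevant.add(j)
                frontier.append(j)
    return relevant

# chain of influence 2 -> 1 -> 0, plus an isolated state 3; cost only on state 0
A = {0: {1}, 1: {2}, 3: set()}
result = sorted(relevant_states(A, {0}))
```

Here states 0, 1, and 2 are flagged as relevant, while state 3, which never feeds into the cost, can be ignored by the optimal controller.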
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Deluca -- A Differentiable Control Library: Environments, Methods, and Benchmarking [52.44199258132215]
We present an open-source library of differentiable physics and robotics environments.
The library features several popular environments, including classical control settings from OpenAI Gym.
We give several use-cases of new scientific results obtained using the library.
arXiv Detail & Related papers (2021-02-19T15:06:47Z) - Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z) - Learning to Control PDEs with Differentiable Physics [102.36050646250871]
We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.
We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs.
arXiv Detail & Related papers (2020-01-21T11:58:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.