Efficient Reservoir Management through Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2012.03822v1
- Date: Mon, 7 Dec 2020 16:13:05 GMT
- Title: Efficient Reservoir Management through Deep Reinforcement Learning
- Authors: Xinrun Wang, Tarun Nair, Haoyang Li, Yuh Sheng Reuben Wong, Nachiket
Kelkar, Srinivas Vaidyanathan, Rajat Nayak, Bo An, Jagdish Krishnaswamy,
Milind Tambe
- Abstract summary: We leverage reinforcement learning (RL) methods to compute efficient dam operation guidelines.
Specifically, we build offline simulators with real data and different mathematical models for the upstream inflow.
Experiments show that the simulator with DLM can efficiently model the inflow dynamics in the upstream and the dam operation policies trained by RL algorithms significantly outperform the human-generated policy.
- Score: 36.89242259597806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dams impact downstream river dynamics through flow regulation and disruption
of upstream-downstream linkages. However, current dam operation is far from
satisfactory due to the inability to respond to the complicated and uncertain
dynamics of the upstream-downstream system and the various uses of the reservoir.
Worse still, unsatisfactory dam operation can cause floods in downstream
areas. Therefore, we leverage reinforcement learning (RL) methods to compute
efficient dam operation guidelines in this work. Specifically, we build offline
simulators with real data and different mathematical models for the upstream
inflow, i.e., generalized least square (GLS) and dynamic linear model (DLM),
then use the simulator to train state-of-the-art RL algorithms, including
DDPG, TD3 and SAC. Experiments show that the simulator with DLM can efficiently
model the inflow dynamics in the upstream and the dam operation policies
trained by RL algorithms significantly outperform the human-generated policy.
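The pipeline the abstract describes (an offline simulator driven by a fitted inflow model, on which RL policies are trained and compared against a human baseline) can be sketched with a toy environment. Everything below is an illustrative assumption rather than the paper's actual simulator: the AR(1) inflow recursion stands in for the fitted GLS/DLM models, the constants and reward shaping are invented, and a hand-coded rule stands in for the human-generated policy.

```python
import random

class DamEnv:
    """Toy offline dam-operation simulator (a sketch, not the paper's actual
    simulator). Inflow follows a simple AR(1) process as a stand-in for the
    dynamic linear model (DLM) fitted to real upstream data."""

    CAPACITY = 100.0      # maximum reservoir storage (arbitrary units)
    FLOOD_RELEASE = 20.0  # releases above this are penalized as flood risk

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.storage = 50.0
        self.inflow = 10.0
        return (self.storage, self.inflow)

    def step(self, release_fraction):
        # Action: fraction of current storage released downstream, clipped to [0, 1].
        release_fraction = min(max(release_fraction, 0.0), 1.0)
        release = release_fraction * self.storage
        # AR(1) inflow: persistence plus noise (stand-in for the GLS/DLM models).
        self.inflow = max(0.0, 0.8 * self.inflow + 2.0 + self.rng.gauss(0.0, 1.0))
        self.storage = min(self.CAPACITY, self.storage - release + self.inflow)
        # Reward: keep storage near half capacity, penalize flood-level releases.
        reward = -abs(self.storage - 0.5 * self.CAPACITY)
        if release > self.FLOOD_RELEASE:
            reward -= 10.0 * (release - self.FLOOD_RELEASE)
        return (self.storage, self.inflow), reward

env = DamEnv(seed=42)
state = env.reset()
total = 0.0
for _ in range(50):
    # Hand-coded baseline policy: release proportionally more when storage is high.
    storage, _ = state
    state, r = env.step(storage / env.CAPACITY * 0.3)
    total += r
print(f"return of baseline policy over 50 steps: {total:.1f}")
```

In the paper's setup, an actor-critic algorithm such as SAC would replace the hand-coded rule, observing `(storage, inflow)` and learning a continuous release action by maximizing this kind of episodic return inside the offline simulator.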
Related papers
- Comparison of Generative Learning Methods for Turbulence Modeling [1.2499537119440245]
High resolution techniques such as Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) are generally not computationally affordable.
Recent advances in machine learning, specifically in generative probabilistic models, offer promising alternatives for turbulence modeling.
This paper investigates the application of three generative models - Variational Autoencoders (VAE), Deep Convolutional Generative Adversarial Networks (DCGAN), and Denoising Diffusion Probabilistic Models (DDPM).
arXiv Detail & Related papers (2024-11-25T14:20:53Z) - Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z) - CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning [38.63187494867502]
CtRL-Sim is a method that leverages return-conditioned offline reinforcement learning (RL) to efficiently generate reactive and controllable traffic agents.
We show that CtRL-Sim can generate realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
arXiv Detail & Related papers (2024-03-29T02:10:19Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in
Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs)
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Robust Path Following on Rivers Using Bootstrapped Reinforcement
Learning [0.0]
This paper develops a Deep Reinforcement Learning (DRL)-agent for navigation and control of autonomous surface vessels (ASV) on inland waterways.
A state-of-the-art bootstrapped Q-learning algorithm in combination with a versatile training environment generator leads to a robust and accurate rudder controller.
arXiv Detail & Related papers (2023-03-24T07:21:27Z) - Let Offline RL Flow: Training Conservative Agents in the Latent Space of
Normalizing Flows [58.762959061522736]
Offline reinforcement learning aims to train a policy on a pre-recorded and fixed dataset without any additional environment interactions.
We build upon recent works on learning policies in latent action spaces and use a special form of Normalizing Flows for constructing a generative model.
We evaluate our method on various locomotion and navigation tasks, demonstrating that our approach outperforms recently proposed algorithms.
arXiv Detail & Related papers (2022-11-20T21:57:10Z) - Development and Validation of an AI-Driven Model for the La Rance Tidal
Barrage: A Generalisable Case Study [2.485182034310303]
An AI-Driven model representation of the La Rance tidal barrage was developed using novel parametrisation and Deep Reinforcement Learning techniques.
Results were validated with experimental measurements, yielding the first Tidal Range Structure (TRS) model validated against a constructed tidal barrage.
arXiv Detail & Related papers (2022-02-10T22:02:52Z) - A Workflow for Offline Model-Free Robotic Reinforcement Learning [117.07743713715291]
Offline reinforcement learning (RL) enables learning control policies by utilizing only prior experience, without any online interaction.
We develop a practical workflow for using offline RL analogous to the relatively well-understood workflows for supervised learning problems.
We demonstrate the efficacy of this workflow in producing effective policies without any online tuning.
arXiv Detail & Related papers (2021-09-22T16:03:29Z) - A Gradient-based Deep Neural Network Model for Simulating Multiphase
Flow in Porous Media [1.5791732557395552]
We describe a gradient-based deep neural network (GDNN) constrained by the physics related to multiphase flow in porous media.
We demonstrate that GDNN can effectively predict the nonlinear patterns of subsurface responses.
arXiv Detail & Related papers (2021-04-30T02:14:00Z) - Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening
Simulations [58.720142291102135]
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent structures.
This paper proposes a strategy to enable DMD to extract features from observations with different mesh topologies and dimensions.
arXiv Detail & Related papers (2021-04-28T22:14:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.