DeepThermal: Combustion Optimization for Thermal Power Generating Units
Using Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2102.11492v2
- Date: Wed, 24 Feb 2021 04:05:07 GMT
- Authors: Xianyuan Zhan, Haoran Xu, Yue Zhang, Yusen Huo, Xiangyu Zhu, Honglei
Yin, Yu Zheng
- Abstract summary: We develop a new data-driven AI system, DeepThermal, to optimize the combustion control strategy for thermal power generating units.
At its core is a new model-based offline reinforcement learning framework, called MORE.
MORE aims to simultaneously improve the long-term reward (increasing combustion efficiency and reducing pollutant emissions) and control operational risks (satisfying safety constraints).
- Score: 25.710523867709664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thermal power generation plays a dominant role in the world's electricity
supply. It consumes large amounts of coal worldwide, and causes serious air
pollution. Optimizing the combustion efficiency of a thermal power generating
unit (TPGU) is a highly challenging and critical task in the energy industry.
We develop a new data-driven AI system, DeepThermal, to optimize the
combustion control strategy for TPGUs. At its core is a new model-based
offline reinforcement learning (RL) framework, called MORE, which leverages
logged historical operational data of a TPGU to solve a highly complex
constrained Markov decision process problem via purely offline training. MORE
aims to simultaneously improve the long-term reward (increasing combustion
efficiency and reducing pollutant emissions) and control operational risks
(satisfying safety constraints). In DeepThermal, we first learn a data-driven
combustion process simulator from the offline dataset. The RL agent of MORE is
then trained by combining real historical data as well as carefully filtered
and processed simulation data through a novel restrictive exploration scheme.
DeepThermal has been successfully deployed in four large coal-fired thermal
power plants in China. Real-world experiments show that DeepThermal effectively
improves the combustion efficiency of a TPGU. We also report and demonstrate
the superior performance of MORE by comparing it with state-of-the-art
algorithms on standard offline RL benchmarks. To the best of the authors'
knowledge, DeepThermal is the first AI application that has been used to solve
real-world complex mission-critical control tasks using the offline RL
approach.
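The pipeline the abstract describes (learn a simulator from logged data, then train the agent on real transitions plus carefully filtered simulated transitions) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the linear ensemble, the disagreement threshold, and all names are assumptions standing in for DeepThermal's neural simulator and restrictive exploration scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Logged operational data: (state, action, next_state) transitions.
real_states = rng.normal(size=(500, 4))
real_actions = rng.normal(size=(500, 2))
real_next = real_states + 0.1 * real_actions @ rng.normal(size=(2, 4))

# Step 1: learn a data-driven process simulator from the offline dataset.
# Here: an ensemble of linear dynamics models fit on bootstrap resamples.
def fit_linear_model(S, A, S_next):
    X = np.hstack([S, A])
    W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
    return W

ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(real_states), len(real_states))
    ensemble.append(fit_linear_model(real_states[idx], real_actions[idx], real_next[idx]))

# Step 2: generate simulated transitions from states in the dataset.
sim_states = real_states[rng.integers(0, len(real_states), 200)]
sim_actions = rng.normal(size=(200, 2))
X_sim = np.hstack([sim_states, sim_actions])
preds = np.stack([X_sim @ W for W in ensemble])  # (ensemble, N, state_dim)

# Step 3: restrictive exploration — keep only simulated transitions on
# which the ensemble agrees, i.e. where the simulator is likely reliable.
disagreement = preds.std(axis=0).max(axis=1)     # per-transition uncertainty
threshold = np.quantile(disagreement, 0.5)       # keep the most reliable half
keep = disagreement <= threshold
sim_next = preds.mean(axis=0)[keep]

# Step 4: the RL agent would then train on real data plus the filtered
# simulated data (the actual policy update is omitted here).
train_states = np.vstack([real_states, sim_states[keep]])
train_next = np.vstack([real_next, sim_next])
```

In the paper's setting the filter also enforces safety constraints; the uncertainty cutoff above only illustrates the "trust the simulator where it agrees with the data" idea.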
Related papers
- D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning [99.33607114541861]
We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments.
Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation.
arXiv Detail & Related papers (2024-08-15T22:27:00Z)
- Go Beyond Black-box Policies: Rethinking the Design of Learning Agent for Interpretable and Verifiable HVAC Control [3.326392645107372]
We overcome the bottleneck by redesigning HVAC controllers using decision trees extracted from thermal dynamics models and historical data.
Our method saves 68.4% more energy and increases human comfort gain by 14.8% compared to the state-of-the-art method.
arXiv Detail & Related papers (2024-02-29T22:42:23Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Optimal Scheduling in IoT-Driven Smart Isolated Microgrids Based on Deep Reinforcement Learning [10.924928763380624]
We investigate the scheduling of diesel generators (DGs) in an Internet of Things-driven microgrid (MG) using deep reinforcement learning (DRL).
The DRL agent learns an optimal policy from historical renewable and load data of previous days.
The goal is to reduce operating cost on the premise of ensuring supply-demand balance.
arXiv Detail & Related papers (2023-04-28T23:52:50Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning [9.936452412191326]
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems.
Recent advances show promising results for model-free DRL-based methods in power systems, but these methods suffer from poor sample efficiency and long training times.
We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model is utilized with the policy learning framework.
arXiv Detail & Related papers (2022-12-06T02:50:53Z)
- Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori knowledge of the building, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Towards Optimal District Heating Temperature Control in China with Deep Reinforcement Learning [0.0]
We build a recurrent neural network, trained on simulated data, to predict the indoor temperatures.
This model is then used to train two DRL agents, with or without expert guidance, for the optimal control of the supply water temperature.
arXiv Detail & Related papers (2020-12-17T11:16:08Z)
- Critic Regularized Regression [70.8487887738354]
We propose a novel offline RL algorithm that learns policies from data using a form of critic-regularized regression (CRR).
We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces.
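CRR's core idea, behavioral cloning on logged actions weighted by the learned critic's advantage estimate, fits in a few lines. A minimal numerical sketch with toy Q-values standing in for a learned critic; the binary and exponential weightings below are the two variants the CRR paper describes, and all concrete numbers are illustrative.

```python
import numpy as np

# Toy critic values for one state with 4 discrete actions.
q = np.array([1.0, 2.5, 0.5, 2.0])      # Q(s, a) from a learned critic
pi = np.array([0.25, 0.25, 0.25, 0.25])  # current policy probabilities
v = (pi * q).sum()                       # V(s) = E_{a~pi}[Q(s, a)]
advantage = q - v

# CRR regresses the policy toward dataset actions with weight f:
#   binary variant:      f = 1[A(s, a) > 0]
#   exponential variant: f = exp(A(s, a) / beta)
beta = 1.0
w_binary = (advantage > 0).astype(float)
w_exp = np.exp(advantage / beta)

# Policy loss on a logged (s, a) pair: -f(s, a) * log pi(a | s),
# so only actions the critic judges better than average are imitated strongly.
logged_action = 1
loss = -w_exp[logged_action] * np.log(pi[logged_action])
```

The binary weight simply discards below-average logged actions, while the exponential weight trades off imitation against critic-estimated improvement via `beta`.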
arXiv Detail & Related papers (2020-06-26T17:50:26Z)
- Dynamic Energy Dispatch Based on Deep Reinforcement Learning in IoT-Driven Smart Isolated Microgrids [8.623472323825556]
Microgrids (MGs) are small, local power grids that can operate independently from the larger utility grid.
This paper focuses on deep reinforcement learning (DRL)-based energy dispatch for IoT-driven smart isolated MGs.
Two novel DRL algorithms are proposed to derive energy dispatch policies with and without fully observable state information.
arXiv Detail & Related papers (2020-02-07T01:44:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.