Optimal control towards sustainable wastewater treatment plants based on
multi-agent reinforcement learning
- URL: http://arxiv.org/abs/2008.10417v3
- Date: Wed, 14 Apr 2021 08:04:28 GMT
- Title: Optimal control towards sustainable wastewater treatment plants based on
multi-agent reinforcement learning
- Authors: Kehua Chen, Hongcheng Wang, Borja Valverde-Perez, Siyuan Zhai, Luca
Vezzaro, Aijie Wang
- Abstract summary: This study used a novel technique, multi-agent deep reinforcement learning, to optimize dissolved oxygen and chemical dosage in a WWTP.
The results show that the LCA-based optimization has lower environmental impacts than the baseline scenario.
The cost-oriented control strategy exhibits overall performance comparable to the LCA-driven strategy.
- Score: 1.0765359420035392
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Wastewater treatment plants (WWTPs) are designed to eliminate
pollutants and alleviate environmental pollution. However, the construction and
operation of WWTPs consume resources, emit greenhouse gases (GHGs) and produce
residual sludge, and thus require further optimization. WWTPs are complex to
control and optimize because of their high nonlinearity and variability. This study used a novel
technique, multi-agent deep reinforcement learning, to simultaneously optimize
dissolved oxygen and chemical dosage in a WWTP. The reward function was
specially designed from a life-cycle perspective to achieve sustainable
optimization. Five scenarios were considered: a baseline, three scenarios with
different effluent quality requirements, and a cost-oriented scenario. The
results show that the LCA-based optimization has lower environmental impacts
than the baseline scenario, as cost, energy consumption and greenhouse gas
emissions fall to 0.890 CNY/m3-ww, 0.530 kWh/m3-ww and 2.491 kg CO2-eq/m3-ww,
respectively. The cost-oriented control strategy exhibits overall performance
comparable to the LCA-driven strategy: it sacrifices some environmental
benefits but achieves a lower cost of 0.873 CNY/m3-ww. It is worth noting that the retrofitting of WWTPs
based on resources should be implemented with consideration of impact
transfer. Specifically, the LCA SW scenario decreases eutrophication potential
by 10 kg PO4-eq compared to the baseline within 10 days, while significantly
increasing other indicators. The major contributors to each
indicator are identified for future study and improvement. Finally, the authors
note that novel dynamic control strategies require advanced sensors or
large amounts of data, so the selection of control strategies should also
consider economic and ecological conditions.
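The LCA-driven reward described in the abstract, which trades off cost, energy use, GHG emissions and effluent quality, can be sketched as a weighted multi-objective penalty shared by the dissolved-oxygen and chemical-dosing agents. The weights, field names and penalty value below are illustrative assumptions, not the paper's actual reward design.

```python
# Hedged sketch of an LCA-style multi-objective reward for WWTP control.
# Weights and signal names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PlantState:
    cost_cny_per_m3: float    # operating cost, CNY/m3-ww
    energy_kwh_per_m3: float  # energy consumption, kWh/m3-ww
    ghg_kg_per_m3: float      # GHG emissions, kg CO2-eq/m3-ww
    effluent_ok: bool         # effluent meets the discharge standard


def lca_reward(s: PlantState,
               w_cost: float = 1.0,
               w_energy: float = 0.5,
               w_ghg: float = 0.3,
               violation_penalty: float = 10.0) -> float:
    """Negative weighted sum of life-cycle impacts, with a hard penalty
    whenever effluent quality is violated."""
    r = -(w_cost * s.cost_cny_per_m3
          + w_energy * s.energy_kwh_per_m3
          + w_ghg * s.ghg_kg_per_m3)
    if not s.effluent_ok:
        r -= violation_penalty
    return r
```

In a cost-oriented scenario the energy and GHG weights would simply be shrunk toward zero, which mirrors the paper's observation that the cost-oriented strategy trades environmental benefits for a lower cost.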
Related papers
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages [56.98243487769916]
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-performance and sample-efficient visual reinforcement learning.
We propose Adaptive RR, which dynamically adjusts the replay ratio based on the critic's plasticity level.
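Adjusting the replay ratio from the critic's plasticity, as Adaptive RR proposes, could be sketched as a simple threshold rule. The metric, threshold and ratio values below are assumptions for illustration, not the paper's actual schedule.

```python
# Hedged sketch: raise the replay ratio only once the critic's plasticity
# has recovered. Threshold and ratio values are illustrative assumptions.
def adaptive_replay_ratio(plasticity: float,
                          low_rr: int = 1,
                          high_rr: int = 4,
                          threshold: float = 0.5) -> int:
    """Keep the replay ratio low while plasticity is low (to avoid
    overfitting early data), then increase reuse once it recovers."""
    return high_rr if plasticity >= threshold else low_rr
```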
arXiv Detail & Related papers (2023-10-11T12:05:34Z)
- Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation [58.550710456745726]
Fine-tuning is the most effective way of adapting pre-trained large language models (LLMs) to downstream applications.
Existing techniques on efficient fine-tuning can only achieve limited reduction of such FLOPs.
We present GreenTrainer, a new technique that adaptively evaluates different tensors' backpropagation costs and contributions to the fine-tuned model accuracy.
arXiv Detail & Related papers (2023-09-22T21:55:18Z)
- Exploring sustainable pathways for urban traffic decarbonization: vehicle technologies, management strategies, and driving behaviour [5.172508424953869]
This research conducts a comprehensive micro-simulation of traffic and emissions in downtown Toronto, Canada.
To achieve this, transformers-based prediction models accurately forecast Greenhouse Gas (GHG) and Nitrogen Oxides (NOx) emissions.
The study finds that 100% battery electric vehicles have the lowest GHG emissions, showing their potential as a sustainable transportation solution.
arXiv Detail & Related papers (2023-08-28T22:17:36Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demands of high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning [54.409634256153154]
In Reinforcement Learning (RL), enhancing sample efficiency is crucial.
In principle, off-policy RL algorithms can improve sample efficiency by allowing multiple updates per environment interaction, but in practice such updates can erode the network's plasticity.
Our study investigates the underlying causes of this phenomenon by dividing plasticity into two aspects.
arXiv Detail & Related papers (2023-06-19T06:14:51Z)
- A SWAT-based Reinforcement Learning Framework for Crop Management [0.0]
We introduce a reinforcement learning (RL) environment that leverages the dynamics in the Soil and Water Assessment Tool (SWAT).
This drastically saves time and resources that would have been otherwise deployed during a full-growing season.
We demonstrate the utility of our framework by developing and benchmarking various decision-making agents following management strategies informed by standard farming practices and state-of-the-art RL algorithms.
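An RL environment wrapping a crop simulator, as in the SWAT-based framework above, typically exposes a reset/step loop over simulated days. The toy dynamics below (water balance, growth proxy, reward) are invented for illustration and stand in for SWAT's actual process models.

```python
# Hedged sketch of a gym-style crop-management environment. All dynamics
# here are invented placeholders, not SWAT's process models.
import random


class CropSeasonEnv:
    """Each step simulates one day; the action is an irrigation amount;
    the reward trades a growth proxy against water use."""

    def __init__(self, season_days: int = 120, seed: int = 0):
        self.season_days = season_days
        self.rng = random.Random(seed)
        self.day = 0
        self.soil_moisture = 0.5

    def reset(self):
        self.day = 0
        self.soil_moisture = 0.5
        return (self.day, self.soil_moisture)

    def step(self, irrigation_mm: float):
        # crude water balance: irrigation adds moisture, evapotranspiration removes it
        et = self.rng.uniform(0.01, 0.05)
        self.soil_moisture = min(1.0, self.soil_moisture + irrigation_mm / 100.0) - et
        self.day += 1
        # reward: growth proxy peaks near an assumed optimal moisture of 0.6,
        # minus a small cost per mm of irrigation applied
        growth = 1.0 - abs(self.soil_moisture - 0.6)
        reward = growth - 0.01 * irrigation_mm
        done = self.day >= self.season_days
        return (self.day, self.soil_moisture), reward, done
```

Running a full season in such a surrogate takes milliseconds, which is the time-and-resource saving the summary refers to relative to a real growing season.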
arXiv Detail & Related papers (2023-02-10T00:24:22Z)
- Automated deep reinforcement learning for real-time scheduling strategy of multi-energy system integrated with post-carbon and direct-air carbon captured system [4.721325160754968]
The adoption of CDRT is not economically viable at the current carbon price.
The proposed DRL agent can meet the prosumers' multi-energy demand and schedule the CDRT energy demand economically.
The configuration with PCCS and solid-sorbent DACS is considered the most suitable.
arXiv Detail & Related papers (2023-01-18T20:22:44Z)
- A deep reinforcement learning model for predictive maintenance planning of road assets: Integrating LCA and LCCA [0.0]
This research proposes a framework using Reinforcement Learning (RL) to determine type and timing of M&R practices.
The results propose a 20-year M&R plan in which the road condition remains in the excellent range.
Decision-makers and transportation agencies can use this scheme to conduct better maintenance practices that can prevent budget waste and minimize the environmental impacts.
arXiv Detail & Related papers (2021-12-20T13:46:39Z)
- Estimating air quality co-benefits of energy transition using machine learning [5.758035706324685]
Estimating health benefits of reducing fossil fuel use from improved air quality provides important rationales for carbon emissions abatement.
We develop a novel and succinct machine learning framework that is able to provide precise and robust annual average fine particle (PM2.5) concentration estimations.
Our findings prompt careful policy designs to maximize cost-effectiveness in the transition towards a carbon-neutral energy system.
arXiv Detail & Related papers (2021-05-29T14:52:57Z)
- Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of all parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.