AI-Driven approach for sustainable extraction of earth's subsurface renewable energy while minimizing seismic activity
- URL: http://arxiv.org/abs/2408.03664v2
- Date: Tue, 5 Nov 2024 15:27:04 GMT
- Title: AI-Driven approach for sustainable extraction of earth's subsurface renewable energy while minimizing seismic activity
- Authors: Diego Gutierrez-Oribio, Alexandros Stathas, Ioannis Stefanou
- Abstract summary: Injection of fluids into the Earth's crust can induce or trigger earthquakes.
We propose a new approach based on Reinforcement Learning for the control of human-induced seismicity.
We show that the reinforcement learning algorithm can interact efficiently with a robust controller.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep Geothermal Energy, Carbon Capture and Storage, and Hydrogen Storage hold considerable promise for meeting the energy sector's large-scale requirements and reducing CO2 emissions. However, the injection of fluids into the Earth's crust, essential for these activities, can induce or trigger earthquakes. In this paper, we highlight a new approach based on Reinforcement Learning for the control of human-induced seismicity in the highly complex environment of an underground reservoir. This complex system poses significant challenges in the control design due to parameter uncertainties and unmodeled dynamics. We show that the reinforcement learning algorithm can interact efficiently with a robust controller, by choosing the controller parameters in real-time, reducing human-induced seismicity and allowing the consideration of further production objectives, e.g., minimal control power. Simulations are presented for a simplified underground reservoir under various energy demand scenarios, demonstrating the reliability and effectiveness of the proposed control-reinforcement learning approach.
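The core idea of the abstract, an RL agent that tunes the parameters of a robust controller in real time while trading off tracking error against control power, can be sketched on a toy system. Everything below (the 1D error dynamics, the saturated control law, the candidate gains, and the bandit-style agent) is an illustrative assumption, not the paper's reservoir model or algorithm.

```python
import numpy as np

# Toy sketch: an epsilon-greedy agent picks the gain of a saturated
# robust controller driving a scalar "pressure error" to zero, with a
# reward that penalizes both tracking error and control power.
rng = np.random.default_rng(0)

GAINS = np.array([0.1, 0.5, 1.0, 2.0])  # candidate controller parameters
q = np.zeros(len(GAINS))                # action-value estimates
counts = np.zeros(len(GAINS))

def episode(gain, disturbance=0.3):
    """Run the closed loop; return (tracking_cost, control_power)."""
    x = 1.0                             # initial error
    cost = power = 0.0
    for _ in range(50):
        u = -gain * np.sign(x) * min(abs(x), 1.0)  # saturated robust law
        x += 0.1 * (u + disturbance * rng.standard_normal())
        cost += x * x
        power += u * u
    return cost, power

for _ in range(300):
    # epsilon-greedy choice of the controller parameter
    a = rng.integers(len(GAINS)) if rng.random() < 0.1 else int(np.argmax(q))
    cost, power = episode(GAINS[a])
    reward = -(cost + 0.01 * power)     # error plus weighted control power
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a] # incremental mean update

best_gain = GAINS[int(np.argmax(q))]
```

Because all rewards are negative and the value estimates start at zero, each gain is tried at least once before the greedy choice settles; the agent then keeps selecting the gain with the best cost/power trade-off.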
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
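Since this entry centers on PPO, its defining ingredient, the clipped surrogate objective, can be stated concretely. The probability ratios and advantages below are illustrative numbers, not results from the paper.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Negative clipped surrogate: -E[min(r*A, clip(r, 1-eps, 1+eps)*A)]."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

ratios = np.array([0.5, 1.0, 1.5])  # pi_new / pi_old per sample
advs = np.array([1.0, -1.0, 2.0])
loss = ppo_clip_loss(ratios, advs)
```

The clip keeps the policy update from exploiting large probability ratios: the third sample's ratio of 1.5 is capped at 1.2, so its contribution is 2.4 rather than 3.0.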
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- Efficient machine-learning surrogates for large-scale geological carbon and energy storage [0.276240219662896]
We propose a specialized machine-learning (ML) model to manage extensive reservoir models efficiently.
We've developed a method to reduce the training cost for deep neural operator models, using domain decomposition and a topology embedder.
This approach allows accurate predictions within the model's domain, even for untrained data, enhancing ML efficiency for large-scale geological storage applications.
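The domain-decomposition idea in this entry can be illustrated with a deliberately simple stand-in: instead of one global surrogate, fit a cheap local model per subdomain and route each query by position. The 1D "response" function, the cubic local fits, and the grid are illustrative assumptions, not the paper's neural-operator or topology-embedder architecture.

```python
import numpy as np

def response(x):
    return np.sin(3.0 * x) + 0.5 * x     # stand-in for a reservoir response

edges = np.linspace(0.0, 4.0, 5)         # four subdomains covering [0, 4]
local_models = []
for lo, hi in zip(edges[:-1], edges[1:]):
    xs = np.linspace(lo, hi, 50)
    local_models.append(np.polyfit(xs, response(xs), deg=3))  # local fit

def predict(x):
    # route the query to the subdomain that contains it
    i = int(np.searchsorted(edges, x, side="right")) - 1
    i = min(max(i, 0), len(local_models) - 1)
    return np.polyval(local_models[i], x)

xq = np.linspace(0.0, 4.0, 200)
max_err = max(abs(predict(x) - response(x)) for x in xq)
```

Each local model only has to capture the behavior on its own subdomain, which is what keeps per-model training cheap as the overall domain grows.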
arXiv Detail & Related papers (2023-10-11T13:05:03Z)
- Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori data, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z)
- Robust Model-based Reinforcement Learning for Autonomous Greenhouse Control [9.022924636907412]
Reinforcement learning (RL) algorithms can surpass human decision-making and can be seamlessly integrated into a closed-loop control framework.
In this paper, we present a model-based robust RL framework for autonomous greenhouse control to meet the sample efficiency and safety challenges.
arXiv Detail & Related papers (2021-08-26T08:27:10Z)
- Development of a Soft Actor Critic Deep Reinforcement Learning Approach for Harnessing Energy Flexibility in a Large Office Building [0.0]
This research is concerned with the novel application and investigation of Soft Actor Critic (SAC) based Deep Reinforcement Learning (DRL).
SAC is a model-free DRL technique that is able to handle continuous action spaces.
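The way SAC handles continuous action spaces can be sketched directly: sample from a Gaussian policy, squash the sample through tanh to bound the action, and correct the log-probability for the change of variables. The means, standard deviations, and shapes below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action(mean, log_std):
    """Tanh-squashed Gaussian sample with corrected log-probability."""
    std = np.exp(log_std)
    z = mean + std * rng.standard_normal(mean.shape)  # reparameterized draw
    a = np.tanh(z)                                    # bounded action in (-1, 1)
    # log-density of the underlying Gaussian, per dimension
    log_p = -0.5 * (((z - mean) / std) ** 2 + 2 * log_std + np.log(2 * np.pi))
    # tanh change-of-variables correction: log|da/dz| = log(1 - tanh(z)^2)
    log_p -= np.log(1.0 - a ** 2 + 1e-6)
    return a, log_p.sum()

action, log_prob = sample_action(np.zeros(2), np.log(0.5) * np.ones(2))
```

The tanh squashing is what lets the same machinery drive bounded physical actuators (valve positions, setpoints) without discretizing the action space.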
arXiv Detail & Related papers (2021-04-25T10:33:35Z)
- Multitask machine learning of collective variables for enhanced sampling of rare events [9.632096602077919]
A data-driven machine learning algorithm is devised to learn collective variables with a neural network.
The resulting latent space is shown to be an effective low-dimensional representation.
This approach is successfully applied to model systems including a 5D Müller-Brown model, a 5D three-well model, and alanine dipeptide in vacuum.
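The idea of learning a low-dimensional collective variable with a neural network can be illustrated in miniature with a linear autoencoder trained by gradient descent: 5D samples that actually vary along one direction are compressed to a 1D latent. The synthetic data, tied dimensions, and learning rate are illustrative assumptions, far simpler than the paper's multitask network.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5D data lying (almost) on a 1D manifold along a fixed direction
direction = np.array([1.0, 2.0, 0.0, -1.0, 0.5])
direction /= np.linalg.norm(direction)
t = rng.standard_normal((500, 1))
X = t * direction + 0.01 * rng.standard_normal((500, 5))

w_e = 0.1 * rng.standard_normal(5)       # encoder weights (5D -> 1D)
w_d = 0.1 * rng.standard_normal(5)       # decoder weights (1D -> 5D)
lr = 0.1
for _ in range(500):
    z = X @ w_e                          # 1D latent "collective variable"
    R = X - np.outer(z, w_d)             # reconstruction residual
    w_d += lr * (2.0 / len(X)) * (R.T @ z)        # gradient descent steps
    w_e += lr * (2.0 / len(X)) * (X.T @ (R @ w_d))

z = X @ w_e
loss = np.mean(np.sum((X - np.outer(z, w_d)) ** 2, axis=1))
alignment = abs(np.dot(w_d / np.linalg.norm(w_d), direction))
```

After training, the decoder direction aligns with the data's dominant direction, i.e. the learned latent recovers the underlying one-dimensional coordinate, which is the sense in which a latent space can serve as an effective low-dimensional representation.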
arXiv Detail & Related papers (2020-12-07T18:40:18Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
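At the heart of any advantage actor-critic variant, including A3C, is the advantage estimate: an n-step discounted return minus the critic's value baseline. The rewards and value estimates below are illustrative numbers, not from the paper.

```python
import numpy as np

def advantages(rewards, values, bootstrap, gamma=0.99):
    """A_t = (r_t + g*r_{t+1} + ... + g^{T-t} * V(s_T)) - V(s_t)."""
    ret = bootstrap                      # value of the state after the rollout
    adv = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        ret = rewards[t] + gamma * ret   # accumulate discounted return
        adv[t] = ret - values[t]         # subtract the critic's baseline
    return adv

adv = advantages([1.0, 0.0, 1.0], values=[0.5, 0.5, 0.5], bootstrap=0.0)
```

The actor is updated in the direction of these advantages, while the critic regresses toward the same discounted returns; in A3C multiple workers compute such updates asynchronously against shared network parameters.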
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Targeted free energy estimation via learned mappings [66.20146549150475]
Free energy perturbation (FEP) was proposed by Zwanzig more than six decades ago as a method to estimate free energy differences.
FEP suffers from a severe limitation: the requirement of sufficient overlap between distributions.
One strategy to mitigate this problem, called Targeted Free Energy Perturbation, uses a high-dimensional mapping in configuration space to increase overlap.
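Zwanzig's FEP identity, dF = -(1/beta) * ln <exp(-beta * dU)>_0, can be checked numerically on a toy system with a known answer: with beta = 1, U0 = x^2/2 and U1 = x^2, the exact free energy difference is 0.5 * ln(2). The harmonic potentials are an illustrative choice, not the systems studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)           # samples from exp(-U0), U0 = x^2/2
dU = 0.5 * x ** 2                          # U1 - U0 with U1 = x^2
dF_est = -np.log(np.mean(np.exp(-dU)))     # Zwanzig / FEP estimator
dF_exact = 0.5 * np.log(2.0)               # analytic reference, ~0.3466
```

This toy case works because the two distributions overlap well; when U1 is shifted far from U0 the exponential average becomes dominated by rare samples, which is exactly the overlap limitation that the learned mapping in Targeted Free Energy Perturbation is designed to mitigate.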
arXiv Detail & Related papers (2020-02-12T11:10:00Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.