Gym-preCICE: Reinforcement Learning Environments for Active Flow Control
- URL: http://arxiv.org/abs/2305.02033v1
- Date: Wed, 3 May 2023 10:54:56 GMT
- Title: Gym-preCICE: Reinforcement Learning Environments for Active Flow Control
- Authors: Mosayeb Shams, Ahmed H. Elsheikh
- Abstract summary: Gym-preCICE is a Python adapter fully compliant with the Gymnasium (formerly OpenAI Gym) API.
Gym-preCICE takes advantage of preCICE, an open-source coupling library for partitioned multi-physics simulations.
The framework results in a seamless integration of realistic physics-based simulation toolboxes with RL algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active flow control (AFC) involves manipulating fluid flow over time to
achieve a desired performance or efficiency. AFC, as a sequential optimisation
task, can benefit from utilising Reinforcement Learning (RL) for dynamic
optimisation. In this work, we introduce Gym-preCICE, a Python adapter fully
compliant with the Gymnasium (formerly known as OpenAI Gym) API to facilitate
designing and developing RL environments for single- and multi-physics AFC
applications. In an actor-environment setting, Gym-preCICE takes advantage of
preCICE, an open-source coupling library for partitioned multi-physics
simulations, to handle information exchange between a controller (actor) and an
AFC simulation environment. The developed framework results in a seamless,
non-invasive integration of realistic physics-based simulation toolboxes with
RL algorithms. Gym-preCICE provides a framework for designing RL environments
to model AFC tasks, as well as a playground for applying RL algorithms in
various AFC-related engineering applications.
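To make the actor-environment pattern concrete, below is a minimal Gymnasium-compliant environment skeleton for an AFC task. It is an illustrative sketch only: the class name, probe/jet layout, and placeholder solver comments are hypothetical, not the actual Gym-preCICE API, which delegates the solver coupling to preCICE.

```python
import gymnasium as gym
import numpy as np

class JetControlEnv(gym.Env):
    """Hypothetical AFC environment skeleton: actions set jet actuation,
    observations are pressure-probe readings, and a coupled flow solver
    would advance the physics between steps."""

    def __init__(self, n_probes=10, n_jets=2):
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(n_jets,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(n_probes,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # A real adapter would (re)start the flow solver and the coupling here.
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        return obs, {}

    def step(self, action):
        # A real adapter would write the actuation to the solver, advance one
        # coupling window, and read the probe values back.
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        reward = 0.0  # e.g. negative drag computed from solver output
        return obs, reward, False, False, {}
```

Because the environment follows the standard reset/step contract, any Gymnasium-compatible RL library can play the role of the controller (actor) without modification.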
Related papers
- A Multi-Agent Reinforcement Learning Testbed for Cognitive Radio Applications [0.48182159227299676]
Radio Frequency Reinforcement Learning (RFRL) will play a prominent role in the wireless communication systems of the future.
This paper provides an overview of the updated RFRL Gym environment.
arXiv Detail & Related papers (2024-10-28T20:45:52Z)
- Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value; a toy sketch of the GAIL-style reward follows below.
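As a rough illustration of the GAIL ingredient only (not the paper's multi-agent method), the sketch below trains a logistic discriminator to separate expert from policy state-action features and derives the surrogate reward from it; all data, shapes, and names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features for expert and policy state-action pairs.
expert_sa = rng.normal(loc=1.0, size=(256, 8))
policy_sa = rng.normal(loc=0.0, size=(256, 8))

w, b, lr = np.zeros(8), 0.0, 0.1

def discriminator(x, w, b):
    # Probability that a state-action pair came from the expert.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

for _ in range(200):
    # Gradient descent on binary cross-entropy: expert labelled 1, policy 0.
    d_exp = discriminator(expert_sa, w, b)
    d_pol = discriminator(policy_sa, w, b)
    grad_w = expert_sa.T @ (d_exp - 1.0) / len(d_exp) + policy_sa.T @ d_pol / len(d_pol)
    grad_b = (d_exp - 1.0).mean() + d_pol.mean()
    w, b = w - lr * grad_w, b - lr * grad_b

# GAIL-style surrogate reward: large when the discriminator mistakes a policy
# sample for expert behaviour, so no manual reward tuning is needed.
rewards = -np.log(1.0 - discriminator(policy_sa, w, b) + 1e-8)
```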
arXiv Detail & Related papers (2024-09-27T13:05:02Z)
- Flextron: Many-in-One Flexible Large Language Model [85.93260172698398]
We introduce Flextron, a network architecture and post-training model optimization framework supporting flexible model deployment.
We present a sample-efficient training method and associated routing algorithms for transforming an existing trained LLM into a Flextron model.
We demonstrate superior performance over multiple end-to-end trained variants and other state-of-the-art elastic networks, all with a single pretraining run that consumes a mere 7.63% of the tokens used in the original pretraining.
arXiv Detail & Related papers (2024-06-11T01:16:10Z)
- Orchestration of Emulator Assisted Mobile Edge Tuning for AI Foundation Models: A Multi-Agent Deep Reinforcement Learning Approach [10.47302625959368]
We present a groundbreaking paradigm integrating Mobile Edge Computing with foundation models, specifically designed to enhance local task performance on user equipment (UE).
Central to our approach is the innovative Emulator-Adapter architecture, segmenting the foundation model into two cohesive modules.
We introduce an advanced resource allocation mechanism that is fine-tuned to the needs of the Emulator-Adapter structure in decentralized settings.
arXiv Detail & Related papers (2023-10-26T15:47:51Z)
- In Situ Framework for Coupling Simulation and Machine Learning with Application to CFD [51.04126395480625]
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
As simulations grow, generating new training datasets for traditional offline learning creates I/O and storage bottlenecks.
This work offers a solution by simplifying the coupling of simulation and ML, enabling in situ training and inference on heterogeneous clusters; a schematic sketch follows below.
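A schematic of what in situ training means in practice (purely illustrative; the stand-in solver and toy linear model below are hypothetical): each sample produced by the running simulation is consumed immediately for a gradient step instead of being written to disk for a later offline job.

```python
import numpy as np

def solver_step(state, rng):
    # Hypothetical stand-in for one time step of a CFD solver; it returns
    # the new state plus one (input, target) training sample.
    new_state = state + 0.01 * rng.normal(size=state.shape)
    return new_state, (state.copy(), new_state.copy())

rng = np.random.default_rng(0)
state = rng.normal(size=16)
weights = np.zeros((16, 16))   # toy linear surrogate model
lr = 1e-3

for _ in range(1000):
    state, (x, y) = solver_step(state, rng)
    # In situ learning: the sample is consumed immediately, in memory,
    # rather than written to disk for a later offline training run.
    pred = x @ weights
    weights -= lr * np.outer(x, pred - y)   # gradient of 0.5 * ||xW - y||^2
```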
arXiv Detail & Related papers (2023-06-22T14:07:54Z)
- Karolos: An Open-Source Reinforcement Learning Framework for Robot-Task Environments [0.3867363075280544]
In reinforcement learning (RL) research, simulations enable benchmark comparisons between algorithms.
In this paper, we introduce Karolos, a framework developed for robotic applications.
The code is open source and published on GitHub with the aim of promoting research of RL applications in robotics.
arXiv Detail & Related papers (2022-12-01T23:14:02Z)
- CaiRL: A High-Performance Reinforcement Learning Environment Toolkit [9.432068833600884]
The CaiRL Environment Toolkit is an efficient, compatible, and more sustainable alternative for training learning agents.
We demonstrate the effectiveness of CaiRL on the classic control benchmarks, comparing its execution speed to OpenAI Gym; a minimal throughput-measurement sketch follows below.
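The Gym-side half of such a speed comparison is straightforward to reproduce; a minimal throughput measurement for a classic-control environment might look like the sketch below (using the standard Gymnasium API; CaiRL's own API is not shown here).

```python
import time
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

n_steps = 100_000
start = time.perf_counter()
for _ in range(n_steps):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
elapsed = time.perf_counter() - start
print(f"{n_steps / elapsed:.0f} steps/s")   # environment step throughput
```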
arXiv Detail & Related papers (2022-10-03T21:24:04Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, multiple learning tasks with different datasets may coexist.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
- Deluca -- A Differentiable Control Library: Environments, Methods, and Benchmarking [52.44199258132215]
We present an open-source library of differentiable physics and robotics environments.
The library features several popular environments, including classical control settings from OpenAI Gym.
We give several use-cases of new scientific results obtained using the library.
arXiv Detail & Related papers (2021-02-19T15:06:47Z)
- Sim-Env: Decoupling OpenAI Gym Environments from Simulation Models [0.0]
Reinforcement learning (RL) is one of the most active fields of AI research.
Development methodology still lags behind, with a severe lack of standard APIs to foster the development of RL applications.
We present a workflow and tools for the decoupled development and maintenance of multi-purpose agent-based models and derived single-purpose reinforcement learning environments (a generic sketch of this decoupling follows the entry).
arXiv Detail & Related papers (2021-02-19T09:25:21Z)
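The decoupling idea can be sketched generically (all names below are hypothetical, not the Sim-Env API): the domain model knows nothing about RL, and a thin Gymnasium wrapper adds only the task framing.

```python
import gymnasium as gym
import numpy as np

class TrafficModel:
    # Hypothetical multi-purpose agent-based model, developed and maintained
    # independently of any RL concern.
    def __init__(self):
        self.queue = 0

    def advance(self, green_seconds):
        arrivals = np.random.poisson(3)
        served = min(self.queue + arrivals, int(green_seconds))
        self.queue = self.queue + arrivals - served
        return self.queue

class SignalControlEnv(gym.Env):
    # Single-purpose RL environment derived from the model: it adds only the
    # task framing (spaces, reward), not simulation logic.
    def __init__(self):
        self.action_space = gym.spaces.Discrete(10)   # green time, 0-9 s
        self.observation_space = gym.spaces.Box(0.0, np.inf, shape=(1,))
        self.model = TrafficModel()

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.model = TrafficModel()
        return np.array([0.0], dtype=np.float32), {}

    def step(self, action):
        queue = self.model.advance(action)
        obs = np.array([queue], dtype=np.float32)
        return obs, -float(queue), False, False, {}
```

Swapping in a different reward or observation yields another single-purpose environment without touching the underlying model.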