Model-Based Reinforcement Learning Control of Reaction-Diffusion Problems
- URL: http://arxiv.org/abs/2402.14446v1
- Date: Thu, 22 Feb 2024 11:06:07 GMT
- Title: Model-Based Reinforcement Learning Control of Reaction-Diffusion Problems
- Authors: Christina Schenk, Aditya Vasudevan, Maciej Haranczyk, Ignacio Romero
- Abstract summary: Reinforcement learning has been applied to decision-making in several applications, most notably in games.
We introduce two novel reward functions to drive the flow of the transported field.
Results show that certain controls can be implemented successfully in these applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mathematical and computational tools have proven to be reliable in
decision-making processes. In recent years, machine learning-based methods in
particular have become increasingly popular as advanced support tools. When
dealing with control problems, reinforcement learning has been applied to
decision-making in several applications, most notably in games. The success of
these methods in finding solutions to complex problems motivates the
exploration of new areas where they can be employed to overcome current
difficulties. In this paper, we explore the application of automatic control
strategies to initial boundary value problems in thermal and disease
transport. Specifically, we adapt an existing reinforcement learning algorithm
using a stochastic policy gradient method and introduce two novel reward
functions to drive the flow of the transported field. The new model-based
framework exploits the interactions between a reaction-diffusion model and the
modified agent. The results show that certain controls can be implemented
successfully in these applications, although simplifying assumptions about the
model were required.
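As a concrete illustration of the pipeline described above, here is a minimal sketch of a stochastic policy gradient (REINFORCE) agent actuating a 1D diffusion model. The finite-difference environment, the linear-Gaussian policy, and the setpoint-tracking reward are all illustrative assumptions of ours, not the paper's actual model or its two reward functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D diffusion environment, explicit finite differences.
N, dx, dt, D = 50, 1.0 / 50, 1e-3, 0.1          # dt * D / dx**2 = 0.25 (stable)

def step_pde(u, source):
    """One explicit Euler step of u_t = D * u_xx + source (crude zero-flux ends)."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0
    return u + dt * (D * lap + source)

# Linear-Gaussian policy over a scalar heat source at the left end of the domain.
w, sigma, lr, target = np.zeros(N), 0.5, 1e-3, 0.8

for episode in range(200):
    u = np.zeros(N)
    grads, rewards = [], []
    for t in range(100):
        mean = float(w @ u)
        a = rng.normal(mean, sigma)               # sample an action from the policy
        source = np.zeros(N)
        source[0] = a / dx                        # point source scaled by cell width
        u = step_pde(u, source)
        rewards.append(-abs(u[N // 2] - target))  # illustrative setpoint reward
        grads.append((a - mean) / sigma**2 * u)   # grad log pi of a Gaussian policy
    returns = np.cumsum(rewards[::-1])[::-1]      # reward-to-go
    for g, ret in zip(grads, returns):
        w += lr * ret * g                         # REINFORCE update
```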
Related papers
- regAL: Python Package for Active Learning of Regression Problems [0.0]
We present our Python package regAL, which allows users to evaluate different active learning strategies for regression problems.
arXiv Detail & Related papers (2024-10-23T14:34:36Z)
- Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z)
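The soft Q-update at the heart of SAC-style methods such as DQO is easy to sketch in tabular form. The toy MDP below stands in for token-level response generation; in DQO the Q-function is parameterized by the language model itself, which this table only caricatures.

```python
import numpy as np

# Toy MDP standing in for token-level generation: in DQO the Q-function below
# would be parameterized by the language model, not a table.
rng = np.random.default_rng(1)
nS, nA, gamma, tau, lr = 3, 2, 0.9, 0.5, 0.1
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] -> next-state distribution
R = rng.normal(size=(nS, nA))                   # reward table
Q = np.zeros((nS, nA))

def soft_value(q_row):
    # Soft state value V(s) = tau * log sum_a exp(Q(s, a) / tau), as in SAC.
    return tau * np.log(np.exp(q_row / tau).sum())

for _ in range(5000):
    s, a = rng.integers(nS), rng.integers(nA)     # uniform behavior policy: offline data
    s2 = rng.choice(nS, p=P[s, a])
    target = R[s, a] + gamma * soft_value(Q[s2])  # soft Bellman backup
    Q[s, a] += lr * (target - Q[s, a])            # move Q toward the target
```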
- Active learning for regression in engineering populations: A risk-informed approach [0.0]
Regression is a fundamental prediction task common in data-centric engineering applications.
Active learning is an approach for preferentially acquiring feature-label pairs in a resource-efficient manner.
It is shown that the proposed approach has superior performance in terms of expected cost: it maintains predictive performance while reducing the number of inspections required.
arXiv Detail & Related papers (2024-09-06T15:03:42Z)
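A rough sketch of the underlying idea, under our own assumptions rather than the paper's (or regAL's) actual interface: query the label whose acquisition is most worthwhile, scoring candidates by an ensemble-variance proxy for expected cost and stopping once no inspection pays for itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pool of candidate inspection points and a small labeled seed set.
X_pool = np.linspace(0, 1, 200)[:, None]
truth = lambda x: np.sin(6 * x).ravel()
labeled = list(rng.choice(200, size=5, replace=False))

def fit_ensemble(X, y, k=10):
    # Bootstrap ensemble of linear models on fixed features -> variance estimate.
    phi = lambda X: np.hstack([X, np.sin(6 * X), np.cos(6 * X)])
    models = []
    for _ in range(k):
        b = rng.integers(len(X), size=len(X))
        w, *_ = np.linalg.lstsq(phi(X[b]), y[b], rcond=None)
        models.append(w)
    return phi, models

inspection_cost, error_cost = 1.0, 50.0         # illustrative cost model
for _ in range(20):
    X, y = X_pool[labeled], truth(X_pool[labeled])
    phi, models = fit_ensemble(X, y)
    preds = np.stack([phi(X_pool) @ w for w in models])
    risk = error_cost * preds.var(axis=0)       # expected cost of mispredicting here
    risk[labeled] = -np.inf                     # never re-query a labeled point
    best = int(np.argmax(risk))
    if risk[best] < inspection_cost:
        break                                   # no remaining query pays for itself
    labeled.append(best)                        # "inspect" the riskiest point
```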
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
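The reward construction itself is simple enough to sketch. The interface below is hypothetical; RLIF's substance is the off-policy RL that runs on top of this signal.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    state: list
    action: list
    next_state: list
    intervened: bool        # did the human expert take over at this step?

def rlif_reward(t: Transition) -> float:
    # The intervention signal itself is the (negative) reward: no task reward,
    # only "the expert felt compelled to step in".
    return -1.0 if t.intervened else 0.0

# A logged transition in which the expert intervened receives reward -1.
t = Transition(state=[0.0], action=[0.3], next_state=[0.1], intervened=True)
assert rlif_reward(t) == -1.0
```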
- Contrastive Example-Based Control [163.6482792040079]
We propose a method for offline, example-based control that learns an implicit model of multi-step transitions, rather than a reward function.
Across a range of state-based and image-based offline control tasks, our method outperforms baselines that use learned reward functions.
arXiv Detail & Related papers (2023-07-24T19:43:22Z)
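A rough sketch of the contrastive ingredient (illustrative, not the paper's architecture): train a critic to distinguish futures actually reached from a state-action pair from futures reached from other pairs, and use its score in place of a learned reward.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, lr = 4, 256, 0.05

# Synthetic batch of (state-action embedding, reached-future embedding) pairs.
sa = rng.normal(size=(n, d))
future = sa + 0.1 * rng.normal(size=(n, d))       # reached futures correlate with sa

W = np.eye(d)                                     # bilinear critic f(sa, g) = sa W g
for _ in range(300):
    logits = sa @ W @ future.T                    # score every (sa_i, future_j) pair
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax over candidate futures
    grad = sa.T @ ((np.eye(n) - p) @ future) / n  # InfoNCE gradient; positives on diag
    W += lr * grad                                # ascend the contrastive objective
```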
- Recent Developments in Machine Learning Methods for Stochastic Control and Games [3.3993877661368757]
Recently, computational methods based on machine learning have been developed for solving control problems and games.
We focus on deep learning methods that have unlocked the possibility of solving such problems, even in high dimensions or when the structure is very complex.
This paper provides an introduction to these methods and summarizes the state-of-the-art works at the crossroads of machine learning and control and games.
arXiv Detail & Related papers (2023-03-17T21:53:07Z)
- Reinforcement Learning in System Identification [0.0]
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition both in science and engineering.
Here we explore the use of Reinforcement Learning for this problem.
We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present experimental results that demonstrate RL is a promising technique for solving these kinds of problems.
arXiv Detail & Related papers (2022-12-14T09:20:42Z)
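One toy way to see the fit, under our own framing rather than necessarily the paper's: treat candidate model parameters as actions and the negative one-step prediction error as the reward, then run a policy gradient method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown scalar system x' = a * x + u, excited by a known input sequence u.
true_a = 0.7
us = rng.normal(size=500)
xs = [0.0]
for u in us:
    xs.append(true_a * xs[-1] + u + 0.01 * rng.normal())

# "Agent": Gaussian policy over the parameter a; reward = -prediction error.
mu, sigma, lr = 0.0, 0.3, 0.005
for t in range(500):
    a_hat = rng.normal(mu, sigma)                 # action: propose a parameter value
    reward = -(xs[t + 1] - (a_hat * xs[t] + us[t])) ** 2
    reward = max(reward, -4.0)                    # crude clipping to tame variance
    mu += lr * reward * (a_hat - mu) / sigma**2   # REINFORCE step on the policy mean
print(f"estimated a = {mu:.2f} (true a = {true_a})")
```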
- Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
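A toy illustration of why differentiability helps (hand-derived gradients on a scalar linear system; SHAC itself uses a full differentiable simulator plus a learned smooth critic): the policy gradient is obtained by differentiating through the simulation steps rather than estimating it from sampled returns.

```python
# Toy version of policy learning through a differentiable simulator.
# One state, linear policy u = k * x, dynamics x' = x + dt * (u - x).
dt, T, lr = 0.1, 10, 0.05
k = 2.0                                     # initial, destabilizing feedback gain

for _ in range(200):
    x, dx_dk = 1.0, 0.0                     # state and its sensitivity dx/dk
    for _ in range(T):
        # Differentiate the simulation step itself (forward-mode chain rule)
        # instead of estimating the gradient from sampled returns.
        dx_dk = dx_dk * (1 + dt * (k - 1)) + dt * x
        x = x * (1 + dt * (k - 1))
    grad = 2 * x * dx_dk + 0.2 * k          # exact gradient of x_T**2 + 0.1 * k**2
    k -= lr * grad                          # first-order policy update
print(f"learned gain k = {k:.2f}, final |x_T| = {abs(x):.3f}")
```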
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling [126.69933134648541]
We present a meta-reinforcement learning algorithm that is both efficient and extrapolates well when faced with out-of-distribution tasks at test time.
Our method is based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data.
arXiv Detail & Related papers (2020-06-12T13:34:46Z)
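A sketch of that insight in miniature, with assumptions of ours throughout: a pretrained dynamics model is adapted to shifted test-time dynamics by a few supervised gradient steps on off-policy transitions, since model fitting, unlike policy evaluation, does not care which policy collected the data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Dynamics model x' = A x, pretrained on the old task; test-time dynamics shift.
A_old, A_new = np.array([[0.9]]), np.array([[0.5]])
A_model = A_old.copy()

# Off-policy transitions from the new task: any behavior policy will do,
# because fitting a dynamics model is plain supervised learning.
X = rng.normal(size=(64, 1))
Y = X @ A_new.T + 0.01 * rng.normal(size=(64, 1))

for _ in range(50):                               # few-step adaptation
    err = X @ A_model.T - Y
    A_model -= 0.1 * (err.T @ X) / len(X)         # gradient step on squared error
print(f"adapted A = {A_model.item():.2f} (true {A_new.item()})")
```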
- Model-based Multi-Agent Reinforcement Learning with Cooperative Prioritized Sweeping [4.5497948012757865]
We present a new model-based reinforcement learning algorithm, Cooperative Prioritized Sweeping.
The algorithm allows for sample-efficient learning on large problems by exploiting a factorization to approximate the value function.
Our method outperforms the state-of-the-art sparse cooperative Q-learning algorithm, both on the well-known SysAdmin benchmark and on randomized environments.
arXiv Detail & Related papers (2020-01-15T19:13:44Z)
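For reference, here is classic single-agent tabular prioritized sweeping, the mechanism that the cooperative, factored variant builds on (toy model, illustrative only):

```python
import heapq
import numpy as np

rng = np.random.default_rng(6)
nS, nA, gamma, theta = 6, 2, 0.95, 1e-3

# Toy model, assumed known here (in practice it is learned from experience).
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] -> next-state distribution
R = rng.normal(size=(nS, nA))
Q = np.zeros((nS, nA))

def backup(s, a):
    return R[s, a] + gamma * (P[s, a] @ Q.max(axis=1))

pq = []                                          # max-heap on |Bellman error|
for s in range(nS):
    for a in range(nA):
        heapq.heappush(pq, (-abs(backup(s, a) - Q[s, a]), s, a))

while pq:
    neg_priority, s, a = heapq.heappop(pq)
    if -neg_priority < theta:
        break                                    # all remaining updates negligible
    Q[s, a] = backup(s, a)                       # sweep the most urgent update first
    for s2 in range(nS):                         # requeue predecessors of s whose
        for a2 in range(nA):                     # Bellman error may have grown
            if P[s2, a2][s] > 1e-6:
                err = abs(backup(s2, a2) - Q[s2, a2])
                if err > theta:
                    heapq.heappush(pq, (-err, s2, a2))
```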
This list is automatically generated from the titles and abstracts of the papers on this site.