TASAC: a twin-actor reinforcement learning framework with stochastic
policy for batch process control
- URL: http://arxiv.org/abs/2204.10685v1
- Date: Fri, 22 Apr 2022 13:00:51 GMT
- Title: TASAC: a twin-actor reinforcement learning framework with stochastic
policy for batch process control
- Authors: Tanuja Joshi, Hariprasad Kodamana, Harikumar Kandath, and Niket
Kaisare
- Abstract summary: Reinforcement Learning (RL), wherein an agent learns the policy by directly interacting with the environment, offers a potential alternative in this context.
RL frameworks with actor-critic architecture have recently become popular for controlling systems where state and action spaces are continuous.
It has been shown that an ensemble of actor and critic networks further helps the agent learn better policies, owing to the enhanced exploration that simultaneous policy learning provides.
- Score: 1.101002667958165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to their complex nonlinear dynamics and batch-to-batch variability, batch
processes pose a challenge for process control. Due to the absence of accurate
models and resulting plant-model mismatch, these problems become harder to
address for advanced model-based control strategies. Reinforcement Learning
(RL), wherein an agent learns the policy by directly interacting with the
environment, offers a potential alternative in this context. RL frameworks with
actor-critic architecture have recently become popular for controlling systems
where state and action spaces are continuous. It has been shown that an
ensemble of actor and critic networks further helps the agent learn better
policies, owing to the enhanced exploration that simultaneous policy learning provides.
To this end, the current study proposes a stochastic actor-critic RL algorithm,
termed Twin Actor Soft Actor-Critic (TASAC), by incorporating an ensemble of
actors for learning, in a maximum entropy framework, for batch process control.
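
The abstract names the ingredients (a SAC-style maximum-entropy learner with an ensemble of two stochastic actors) but not the update rules. The sketch below shows one plausible twin-actor action-selection step; the "keep the proposal the conservative critic prefers" rule, the clipped double-Q critics, and all network sizes are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: twin stochastic actors inside a SAC-style learner.
# For each state, each actor proposes an action; the proposal with the
# higher conservative (min-over-critics) Q-value is kept.
import torch
import torch.nn as nn

class GaussianActor(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * action_dim))

    def sample(self, state):
        mean, log_std = self.net(state).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())
        raw = dist.rsample()                       # reparameterised sample
        action = torch.tanh(raw)                   # bounded control move
        log_prob = (dist.log_prob(raw)
                    - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
        return action, log_prob

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def twin_actor_action(actors, critics, state):
    """For each state, keep the sampled action that the conservative
    (min over critics) Q-value prefers among the actors' proposals."""
    proposals = torch.stack([a.sample(state)[0] for a in actors])    # (n_actors, B, A)
    scores = torch.stack([
        torch.min(torch.stack([c(state, p) for c in critics]), dim=0).values
        for p in proposals])                                         # (n_actors, B)
    winner = scores.argmax(dim=0)                                    # winning actor per state
    return proposals[winner, torch.arange(state.shape[0])]

# Toy usage: a 4-state, 2-input batch-process controller
actors = [GaussianActor(4, 2) for _ in range(2)]
critics = [Critic(4, 2) for _ in range(2)]
print(twin_actor_action(actors, critics, torch.randn(8, 4)).shape)   # torch.Size([8, 2])
```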
Related papers
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
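
The ETPO entry above gives only the high-level idea (an entropy-augmented RL objective applied at the token level). As a purely illustrative sketch of what a generic token-level policy-gradient loss with a per-token entropy bonus can look like (not ETPO's actual objective, which is defined in the paper):

```python
# Hedged sketch: token-level policy gradient with an entropy bonus per token.
import torch

def token_level_pg_loss(logits, actions, advantages, entropy_coef=0.01):
    """logits: (B, T, V) per-token logits from the language-model policy,
    actions: (B, T) sampled token ids, advantages: (B, T) per-token credit."""
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)       # (B, T) log pi(a_t | context)
    entropy = dist.entropy()                 # (B, T) per-token entropy
    # Maximise advantage-weighted log-likelihood plus entropy at every token.
    return -(log_probs * advantages + entropy_coef * entropy).mean()

loss = token_level_pg_loss(torch.randn(2, 5, 100),
                           torch.randint(0, 100, (2, 5)),
                           torch.randn(2, 5))
```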
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe for converting static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
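
The entry above only states that the action-quantization scheme is adaptive. A simple uniform per-dimension binning baseline is sketched below to show what quantizing a continuous-action dataset for discrete offline RL involves; the paper's adaptive scheme differs from this baseline.

```python
# Hedged sketch: quantizing continuous dataset actions so discrete-action
# offline RL methods (e.g. discretised IQL/CQL/BRAC) can be applied.
# Uniform per-dimension binning is a baseline, not the paper's method.
import numpy as np

def build_bins(actions, n_bins=11):
    """Per-dimension bin centres spanning the dataset's action range."""
    lo, hi = actions.min(axis=0), actions.max(axis=0)
    return np.linspace(lo, hi, n_bins)                        # (n_bins, action_dim)

def quantize(actions, centres):
    """Map each continuous action to its nearest bin index, per dimension."""
    dist = np.abs(actions[:, None, :] - centres[None, :, :])  # (N, n_bins, A)
    return dist.argmin(axis=1)                                 # (N, A) integer codes

def dequantize(codes, centres):
    """Recover a continuous action from per-dimension bin indices."""
    return np.take_along_axis(centres, codes, axis=0)

dataset_actions = np.random.uniform(-2.0, 2.0, size=(1000, 3))
centres = build_bins(dataset_actions)
codes = quantize(dataset_actions, centres)
recovered = dequantize(codes, centres)   # discretised actions used for training
```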
- Soft Decomposed Policy-Critic: Bridging the Gap for Effective Continuous Control with Discrete RL [47.80205106726076]
We present the Soft Decomposed Policy-Critic (SDPC) architecture, which combines soft RL and actor-critic techniques with discrete RL methods to overcome the limitations of applying discrete RL to continuous control.
SDPC discretizes each action dimension independently and employs a shared critic network to maximize the soft $Q$-function.
Our proposed approach outperforms state-of-the-art continuous RL algorithms in a variety of continuous control tasks, including Mujoco's Humanoid and Box2d's BiWalker.
arXiv Detail & Related papers (2023-08-20T08:32:11Z)
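
The SDPC entry above mentions two ingredients: discretizing each action dimension independently and training a shared critic on the soft Q-function. A minimal factored categorical policy whose sampled bins are mapped back to continuous values is sketched below; the network sizes and the fixed bin grid are assumptions, not SDPC's exact architecture.

```python
# Hedged sketch: per-dimension discretisation with a factored softmax policy,
# returning the entropy needed by a soft (maximum-entropy) critic.
import torch
import torch.nn as nn

class FactoredDiscretePolicy(nn.Module):
    def __init__(self, state_dim, action_dim, n_bins=11):
        super().__init__()
        self.action_dim, self.n_bins = action_dim, n_bins
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim * n_bins))
        # Shared bin grid on [-1, 1] for every action dimension (assumption).
        self.register_buffer("bins", torch.linspace(-1.0, 1.0, n_bins))

    def forward(self, state):
        logits = self.net(state).view(-1, self.action_dim, self.n_bins)
        dist = torch.distributions.Categorical(logits=logits)  # one softmax per dim
        idx = dist.sample()                                     # (B, action_dim)
        action = self.bins[idx]                                 # back to continuous
        log_prob = dist.log_prob(idx).sum(-1)                   # factored log-prob
        return action, log_prob, dist.entropy().sum(-1)         # entropy for soft RL

policy = FactoredDiscretePolicy(state_dim=8, action_dim=2)
action, logp, ent = policy(torch.randn(4, 8))   # actions in [-1, 1]^2 per sample
```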
- Solving Continuous Control via Q-learning [54.05120662838286]
We show that a simple modification of deep Q-learning largely alleviates issues with actor-critic methods.
By combining bang-bang action discretization with value decomposition, thereby framing single-agent control as cooperative multi-agent reinforcement learning (MARL), this simple critic-only approach matches the performance of state-of-the-art continuous actor-critic methods.
arXiv Detail & Related papers (2022-10-22T22:55:50Z)
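
The entry above names bang-bang discretization (each action dimension restricted to its extremes) plus value decomposition across dimensions. A rough sketch of greedy action selection under that factorization is shown below; decomposing the joint value as a sum of per-dimension utilities is one common choice and an assumption here, not necessarily the paper's exact decomposition.

```python
# Hedged sketch: bang-bang discretisation + per-dimension value decomposition.
# Each dimension independently picks its better extreme; the joint value is
# taken as the sum of per-dimension utilities.
import numpy as np

ACTION_LOW, ACTION_HIGH = -1.0, 1.0   # bang-bang: only the extremes are allowed

def greedy_bang_bang(per_dim_q):
    """per_dim_q: (action_dim, 2) utilities for choosing low/high per dimension.
    Returns the greedy continuous action and its decomposed value."""
    choice = per_dim_q.argmax(axis=1)                        # 0 -> low, 1 -> high
    action = np.where(choice == 0, ACTION_LOW, ACTION_HIGH)
    return action, per_dim_q.max(axis=1).sum()               # summed joint value

# Example: a 3-dimensional actuator with some per-dimension utilities
q = np.array([[0.2, 0.9],
              [0.7, 0.1],
              [0.4, 0.5]])
action, value = greedy_bang_bang(q)   # -> [ 1., -1.,  1.], value 2.1
```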
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resilience to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization [63.75188254377202]
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments.
We propose State-Conservative Policy Optimization (SCPO), a model-free actor-critic algorithm that learns robust policies without modeling the disturbance in advance.
Experiments in several robot control tasks demonstrate that SCPO learns robust policies against the disturbance in transition dynamics.
arXiv Detail & Related papers (2021-12-20T13:13:05Z)
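
The SCPO entry above states only the goal (robustness to transition-dynamics disturbances without modeling them). One generic way to encourage state-conservative behaviour, offered purely as an illustration and not as SCPO's actual update, is to evaluate the actor loss at a worst-case perturbation of the input state inside a small ball:

```python
# Hedged sketch: evaluate the actor loss at an adversarially perturbed state
# within an epsilon-ball. Generic illustration only; SCPO's precise
# formulation is given in the paper.
import torch

def perturbed_actor_loss(actor, critic, state, epsilon=0.05, steps=3, lr=0.01):
    """Search for a small state perturbation that lowers Q(s + d, pi(s + d)),
    then return the actor loss evaluated at that worst-case state."""
    delta = torch.zeros_like(state, requires_grad=True)
    for _ in range(steps):
        q = critic(state + delta, actor(state + delta)).mean()
        grad, = torch.autograd.grad(q, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()                       # descend on Q
            delta.clamp_(-epsilon, epsilon)                 # stay in the ball
    worst_state = (state + delta).detach()
    return -critic(worst_state, actor(worst_state)).mean()  # maximise worst-case Q

# Toy usage with a linear policy and a quadratic-cost critic
actor = torch.nn.Linear(4, 2)
critic = lambda s, a: -(s.pow(2).sum(-1) + 0.1 * a.pow(2).sum(-1))
loss = perturbed_actor_loss(actor, critic, torch.randn(8, 4))
```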
- Curriculum Based Reinforcement Learning of Grid Topology Controllers to Prevent Thermal Cascading [0.19116784879310028]
This paper describes how domain knowledge of power system operators can be integrated into reinforcement learning frameworks.
A curriculum-based approach with reward tuning is incorporated into the training procedure by modifying the environment.
A parallel training approach on multiple scenarios is employed to avoid biasing the agent to a few scenarios and make it robust to the natural variability in grid operations.
arXiv Detail & Related papers (2021-12-18T20:32:05Z)
- Deep RL With Information Constrained Policies: Generalization in Continuous Control [21.46148507577606]
We show that a natural constraint on information flow can confer generalization benefits on artificial agents in continuous control tasks.
We implement a novel Capacity-Limited Actor-Critic (CLAC) algorithm.
Our experiments show that compared to alternative approaches, CLAC offers improvements in generalization between training and modified test environments.
arXiv Detail & Related papers (2020-10-09T15:42:21Z)
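
The CLAC entry above describes the idea only loosely (a constraint on information flow). One way to express such a capacity limit, shown as an illustrative assumption rather than CLAC's exact objective, is to penalize the divergence between the state-conditional policy and a state-independent action prior, which bounds how much information the policy extracts from the state:

```python
# Hedged sketch: KL(pi(.|s) || state-independent prior) as a proxy for a
# capacity limit on the policy. Illustrative assumption, not CLAC's loss.
import torch

def capacity_limited_actor_loss(q_values, policy_dist, prior_dist, beta=0.1):
    """q_values: (B,) critic values for sampled actions;
    policy_dist / prior_dist: torch.distributions objects over actions."""
    info_cost = torch.distributions.kl_divergence(policy_dist, prior_dist)
    if info_cost.dim() > 1:
        info_cost = info_cost.sum(-1)        # sum over independent action dims
    # Trade off return against the information used about the state.
    return (-q_values + beta * info_cost).mean()

# Toy usage with diagonal Gaussians over a 2-D action
policy = torch.distributions.Normal(torch.randn(8, 2), torch.ones(8, 2))
prior = torch.distributions.Normal(torch.zeros(8, 2), torch.ones(8, 2))
loss = capacity_limited_actor_loss(torch.randn(8), policy, prior)
```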
- Learning Off-Policy with Online Planning [18.63424441772675]
We investigate a novel instantiation of H-step lookahead with a learned model and a terminal value function.
We show the flexibility of LOOP to incorporate safety constraints during deployment with a set of navigation environments.
arXiv Detail & Related papers (2020-08-23T16:18:44Z)
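
The LOOP entry above describes H-step lookahead with a learned dynamics model and a terminal value function. A minimal random-shooting version of that planner is sketched below; the shooting-based search is an assumption chosen for brevity, and LOOP's actual action optimizer may differ.

```python
# Hedged sketch: H-step lookahead planning with a learned model and a
# terminal value function, using random shooting to search over actions.
import numpy as np

def h_step_lookahead(state, model, reward_fn, value_fn,
                     horizon=5, n_candidates=64, action_dim=2, gamma=0.99):
    """Score random action sequences by H-step model rollouts plus a terminal
    value, and return the first action of the best-scoring sequence."""
    plans = np.random.uniform(-1, 1, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, plan in enumerate(plans):
        s, discount = state, 1.0
        for a in plan:                               # simulate H steps with the model
            returns[i] += discount * reward_fn(s, a)
            s, discount = model(s, a), discount * gamma
        returns[i] += discount * value_fn(s)         # bootstrap with terminal value
    return plans[returns.argmax(), 0]

# Toy usage: linear dynamics, quadratic cost, zero terminal value
model = lambda s, a: 0.9 * s + 0.1 * np.concatenate([a, a])
reward_fn = lambda s, a: -float(np.sum(s**2) + 0.1 * np.sum(a**2))
value_fn = lambda s: 0.0
first_action = h_step_lookahead(np.zeros(4), model, reward_fn, value_fn)
```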
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
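
The SUNRISE entry above is specific enough to sketch both ingredients: down-weighting Bellman targets whose Q-ensemble disagrees, and selecting actions by an upper confidence bound over the ensemble. The particular sigmoid weighting below is an illustrative assumption; only the two ingredients themselves come from the entry.

```python
# Hedged sketch of the two SUNRISE ingredients described above:
# (a) confidence-weighted Bellman targets from a Q-ensemble, and
# (b) upper-confidence-bound action selection over the ensemble.
import numpy as np

def weighted_bellman_targets(rewards, next_q_ensemble, gamma=0.99, temperature=10.0):
    """next_q_ensemble: (n_ensemble, B) target Q-values for the next state-action.
    Returns per-sample targets and confidence weights in (0, 1)."""
    mean_q = next_q_ensemble.mean(axis=0)
    std_q = next_q_ensemble.std(axis=0)                   # ensemble disagreement
    targets = rewards + gamma * mean_q
    weights = 1.0 / (1.0 + np.exp(temperature * std_q))   # uncertain targets count less
    return targets, weights

def ucb_action(q_ensemble_per_action, exploration_coef=1.0):
    """q_ensemble_per_action: (n_ensemble, n_actions) Q-values for one state.
    Picks the action with the highest mean + coef * std (upper confidence bound)."""
    mean = q_ensemble_per_action.mean(axis=0)
    std = q_ensemble_per_action.std(axis=0)
    return int(np.argmax(mean + exploration_coef * std))

targets, weights = weighted_bellman_targets(np.zeros(4), np.random.randn(5, 4))
action = ucb_action(np.random.randn(5, 3))
```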
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.