Regularized Soft Actor-Critic for Behavior Transfer Learning
- URL: http://arxiv.org/abs/2209.13224v1
- Date: Tue, 27 Sep 2022 07:52:04 GMT
- Title: Regularized Soft Actor-Critic for Behavior Transfer Learning
- Authors: Mingxi Tan, Andong Tian, Ludovic Denoyer
- Abstract summary: Existing imitation learning methods mainly focus on making an agent effectively mimic a demonstrated behavior.
We propose a method called Regularized Soft Actor-Critic which formulates the main task and the imitation task under the Constrained Markov Decision Process (CMDP) framework.
We evaluate our method on continuous control tasks relevant to video game applications.
- Score: 10.519534498340482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing imitation learning methods mainly focus on making an agent
effectively mimic a demonstrated behavior, but do not address the potential
contradiction between the behavior style and the objective of a task. There is
a general lack of efficient methods that allow an agent to partially imitate a
demonstrated behavior to varying degrees, while completing the main objective
of a task. In this paper we propose a method called Regularized Soft
Actor-Critic which formulates the main task and the imitation task under the
Constrained Markov Decision Process (CMDP) framework. The main task is defined
as the maximum entropy objective used in Soft Actor-Critic (SAC) and the
imitation task is defined as a constraint. We evaluate our method on continuous
control tasks relevant to video game applications.
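The abstract gives the structure of the method (SAC's maximum-entropy objective as the main task, imitation as a CMDP constraint) but not its exact updates. A common way to optimize such a constrained objective is a Lagrangian relaxation with a learnable multiplier; the PyTorch sketch below illustrates that idea only. The imitation cost (a negative demonstration log-likelihood), the threshold epsilon, and the multiplier parameterization log_lam are assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): a SAC-style policy
# update with an imitation constraint handled via Lagrangian relaxation.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Diagonal-Gaussian policy, the usual SAC parameterization."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        return torch.distributions.Normal(self.mu(self.body(obs)),
                                          self.log_std.exp())

def rsac_losses(policy, q_fn, demo_logp_fn, obs, alpha, log_lam, epsilon):
    """Main task: SAC's maximum-entropy objective (minimized form).
    Imitation task: keep an assumed imitation cost below epsilon."""
    dist = policy.dist(obs)
    action = dist.rsample()                    # reparameterized sample
    logp = dist.log_prob(action).sum(-1)
    sac_loss = (alpha * logp - q_fn(obs, action)).mean()
    # Constraint cost: negative log-likelihood under a demonstration model.
    imitation_cost = -demo_logp_fn(obs, action).mean()
    actor_loss = sac_loss + log_lam.exp().detach() * imitation_cost
    # Dual ascent: the multiplier grows while the constraint is violated
    # (cost > epsilon) and shrinks once it is satisfied.
    lam_loss = -(log_lam * (imitation_cost.detach() - epsilon))
    return actor_loss, lam_loss
```

Under this relaxation, tuning epsilon would control how closely the learned behavior tracks the demonstration while the main-task return is still maximized, which matches the abstract's goal of partial imitation to varying degrees.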
Related papers
- Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation [12.377289165111028]
Reinforcement learning (RL) often necessitates a meticulous Markov Decision Process (MDP) design tailored to each task.
This work proposes a systematic approach to behavior synthesis and control for multi-contact loco-manipulation tasks.
We define a task-independent MDP to train RL policies using only a single demonstration per task, generated from a model-based trajectory.
arXiv Detail & Related papers (2024-10-17T17:46:27Z)
- Active Fine-Tuning of Generalist Policies [54.65568433408307]
We propose AMF (Active Multi-task Fine-tuning) to maximize multi-task policy performance under a limited demonstration budget.
We derive performance guarantees for AMF under regularity assumptions and demonstrate its empirical effectiveness in complex and high-dimensional environments.
arXiv Detail & Related papers (2024-10-07T13:26:36Z)
- Code Models are Zero-shot Precondition Reasoners [83.8561159080672]
We use code representations to reason about action preconditions for sequential decision-making tasks.
We propose a precondition-aware action sampling strategy that ensures actions predicted by a policy are consistent with preconditions; a toy sketch of this idea appears after the list below.
arXiv Detail & Related papers (2023-11-16T06:19:27Z)
- On the Value of Myopic Behavior in Policy Reuse [67.37788288093299]
Leveraging learned strategies in unfamiliar scenarios is fundamental to human intelligence.
In this work, we present a framework called Selective Myopic bEhavior Control (SMEC).
SMEC adaptively aggregates the sharable short-term behaviors of prior policies and the long-term behaviors of the task policy, leading to coordinated decisions.
arXiv Detail & Related papers (2023-05-28T03:59:37Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that the resulting method, DCIL-II, can solve challenging simulated tasks such as humanoid locomotion and stand-up with unprecedented sample efficiency.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Direct Behavior Specification via Constrained Reinforcement Learning [12.679780444702573]
CMDPs can be adapted to solve goal-based tasks while adhering to a set of behavioral constraints.
We evaluate this framework on a set of continuous control tasks relevant to the application of Reinforcement Learning for NPC design in video games.
arXiv Detail & Related papers (2021-12-22T21:12:28Z)
- Reinforcement Learning With Sparse-Executing Actions via Sparsity Regularization [15.945378631406024]
Reinforcement learning (RL) has demonstrated impressive performance in decision-making tasks like embodied control, autonomous driving and financial trading.
In many decision-making tasks, the agents often encounter the problem of executing actions under limited budgets.
This paper formalizes the problem as a Sparse Action Markov Decision Process (SA-MDP), in which specific actions in the action space can only be executed a limited number of times.
We propose a policy optimization algorithm, Action Sparsity REgularization (ASRE), which adaptively handles each action with a distinct preference.
arXiv Detail & Related papers (2021-05-18T16:50:42Z)
- Domain-Robust Visual Imitation Learning with Mutual Information Constraints [0.0]
We introduce a new algorithm called Disentangling Generative Adversarial Imitation Learning (DisentanGAIL).
Our algorithm enables autonomous agents to learn directly from high dimensional observations of an expert performing a task.
arXiv Detail & Related papers (2021-03-08T21:18:58Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme that avoids common pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z)
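As referenced in the precondition-reasoning entry above, here is a toy, hypothetical sketch of precondition-aware action sampling: candidate actions drawn from a policy are rejected when a precondition check fails, so executed actions stay consistent with preconditions. The predicate preconditions_hold and the key-and-door state are stand-ins for that paper's code-model reasoner, not its actual implementation.

```python
# Toy sketch of precondition-aware action sampling; the precondition
# checker here is a hand-written stand-in, not the paper's code model.
import random

def preconditions_hold(state: dict, action: str) -> bool:
    """Stand-in precondition check (assumed interface)."""
    if action == "open_door":
        return state.get("has_key", False)
    return True

def sample_consistent_action(policy_probs: dict, state: dict,
                             max_tries: int = 10) -> str:
    """Sample from the policy, rejecting actions whose preconditions fail."""
    actions, weights = zip(*policy_probs.items())
    for _ in range(max_tries):
        action = random.choices(actions, weights=weights)[0]
        if preconditions_hold(state, action):
            return action
    # Fallback: pick any admissible action deterministically.
    admissible = [a for a in actions if preconditions_hold(state, a)]
    return admissible[0] if admissible else actions[0]

state = {"has_key": False}
policy_probs = {"open_door": 0.7, "pick_up_key": 0.3}
print(sample_consistent_action(policy_probs, state))  # -> pick_up_key
```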
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.