Inverse Reinforcement Learning: a Control Lyapunov Approach
- URL: http://arxiv.org/abs/2104.04483v1
- Date: Fri, 9 Apr 2021 17:08:16 GMT
- Title: Inverse Reinforcement Learning: a Control Lyapunov Approach
- Authors: Samuel Tesfazgi, Armin Lederer and Sandra Hirche
- Abstract summary: In this work, we reformulate the IRL inference problem as learning control Lyapunov functions from demonstrations.
We show the flexibility of our proposed method by learning from goal-directed movement demonstrations in a continuous environment.
- Score: 8.996358964203298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferring the intent of an intelligent agent from demonstrations, and
subsequently predicting its behavior, is a critical task in many collaborative
settings. A common approach to this problem is the framework of inverse
reinforcement learning (IRL), where the observed agent, e.g., a human
demonstrator, is assumed to behave according to an intrinsic cost function that
reflects its intent and informs its control actions. In this work, we
reformulate the IRL inference problem as learning control Lyapunov functions
(CLFs) from demonstrations by exploiting the inverse optimality property, which
states that every CLF is also a meaningful value function. Moreover, the
derived CLF formulation directly guarantees stability of the inferred control
policies. We show the flexibility of the proposed method by learning from
goal-directed movement demonstrations in a continuous environment.
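The CLF idea above can be illustrated with a minimal sketch. This is an assumed toy construction, not the authors' algorithm: it fits a quadratic Lyapunov candidate V(x) = xᵀPx to demonstration transitions by penalizing any increase of V along observed steps, so that V decreases toward the goal (taken to be the origin) as a CLF should.

```python
# Hedged sketch (assumed toy construction, not the paper's method): fit a
# quadratic Lyapunov candidate V(x) = x' P x to demonstration data by
# penalizing increases of V along observed transitions.
import numpy as np

def fit_quadratic_clf(states, next_states, n_iters=500, lr=1e-2):
    """states, next_states: arrays of shape (N, d) from goal-directed demos,
    with the goal assumed at the origin. Returns a symmetric PSD matrix P."""
    d = states.shape[1]
    L = np.eye(d)  # parameterize P = L L' so P stays positive semidefinite
    for _ in range(n_iters):
        P = L @ L.T
        v_now = np.einsum('ni,ij,nj->n', states, P, states)
        v_next = np.einsum('ni,ij,nj->n', next_states, P, next_states)
        viol = np.maximum(v_next - v_now, 0.0)  # hinge on the decrease condition
        grad = np.zeros_like(L)
        for k in np.nonzero(viol)[0]:
            x, xp = states[k], next_states[k]
            # subgradient of (xp' L L' xp - x' L L' x) with respect to L
            grad += 2 * (np.outer(xp, xp) - np.outer(x, x)) @ L
        L -= lr * grad / max(len(states), 1)
    return L @ L.T

# toy demo: trajectories of a contracting system should satisfy the
# decrease condition for the fitted P
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Xn = 0.9 * X  # dynamics contracting toward the origin
P = fit_quadratic_clf(X, Xn)
```

A stability-certified policy would then be derived from such a V, e.g. by choosing actions that make V decrease; that step is omitted here.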
Related papers
- Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation [12.377289165111028]
Reinforcement learning (RL) often necessitates a meticulous Markov Decision Process (MDP) design tailored to each task.
This work proposes a systematic approach to behavior synthesis and control for multi-contact loco-manipulation tasks.
We define a task-independent MDP to train RL policies using only a single demonstration per task generated from a model-based trajectory.
arXiv Detail & Related papers (2024-10-17T17:46:27Z)
- A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents.
Recent work has developed theoretical insights into these algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective.
arXiv Detail & Related papers (2024-06-04T07:22:12Z)
- Inverse-RLignment: Inverse Reinforcement Learning from Demonstrations for LLM Alignment [62.05713042908654]
We introduce Alignment from Demonstrations (AfD), a novel approach leveraging high-quality demonstration data to overcome these challenges.
We formalize AfD within a sequential decision-making framework, highlighting its unique challenge of missing reward signals.
Practically, we propose a computationally efficient algorithm that extrapolates over a tailored reward model for AfD.
arXiv Detail & Related papers (2024-05-24T15:13:53Z)
- Stable Inverse Reinforcement Learning: Policies from Control Lyapunov Landscapes [4.229902091180109]
We propose a novel, stability-certified IRL approach to learning control Lyapunov functions from demonstration data.
By exploiting closed-form expressions for associated control policies, we are able to efficiently search the space of CLFs.
We present a theoretical analysis of the optimality properties provided by the CLF and evaluate our approach using both simulated and real-world data.
arXiv Detail & Related papers (2024-05-14T16:40:45Z)
- Actively Learning Reinforcement Learning: A Stochastic Optimal Control Approach [3.453622106101339]
We propose a framework towards achieving two intertwined objectives: (i) equipping reinforcement learning with active exploration and deliberate information gathering, and (ii) overcoming the computational intractability of the optimal control law.
We approach both objectives by using reinforcement learning to compute the optimal control law.
Rather than fixing the exploration-exploitation balance in advance, the controller employs caution and probing automatically in real time, even after the learning process has terminated.
arXiv Detail & Related papers (2023-09-18T18:05:35Z)
- Discovering Individual Rewards in Collective Behavior through Inverse Multi-Agent Reinforcement Learning [3.4437947384641032]
We introduce an off-policy inverse multi-agent reinforcement learning algorithm (IMARL)
By leveraging demonstrations, our algorithm automatically uncovers the reward function and learns an effective policy for the agents.
The proposed IMARL algorithm is a significant step towards understanding collective dynamics from the perspective of its constituents.
arXiv Detail & Related papers (2023-05-17T20:07:30Z)
- D-Shape: Demonstration-Shaped Reinforcement Learning via Goal Conditioning [48.57484755946714]
This paper introduces D-Shape, a new method for combining imitation learning (IL) and reinforcement learning (RL) that uses ideas from reward shaping and goal-conditioned RL to reconcile the two objectives.
We experimentally validate D-Shape in sparse-reward gridworld domains, showing that it both improves over RL in terms of sample efficiency and converges consistently to the optimal policy.
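One building block that reward-shaping-based methods like D-Shape draw on can be sketched briefly. This is a generic illustration of potential-based reward shaping (Ng et al., 1999), not D-Shape itself; the potential function `phi` chosen here is a hypothetical example.

```python
# Generic potential-based reward shaping (an ingredient of such methods,
# not D-Shape itself). Adding F = gamma*phi(s') - phi(s) to the reward
# provably preserves the optimal policy.
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """r: environment reward; phi: potential function, e.g. negative
    distance to a goal state (a hypothetical choice for illustration)."""
    return r + gamma * phi(s_next) - phi(s)

# usage on a 1-D gridworld with the goal at position 10
phi = lambda s: -abs(10 - s)  # higher potential closer to the goal
r_toward = shaped_reward(0.0, 5, 6, phi)  # positive: step toward the goal
r_away = shaped_reward(0.0, 5, 4, phi)    # negative: step away from it
```

The shaping term densifies the sparse reward without changing which policy is optimal, which is why it suits sparse-reward gridworlds like those in the experiments above.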
arXiv Detail & Related papers (2022-10-26T02:28:32Z)
- Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning [96.72185761508668]
IMPLANT (Imitation with Planning at Test-time) is a new meta-algorithm for imitation learning based on decision-time planning.
We demonstrate that IMPLANT significantly outperforms benchmark imitation learning approaches on standard control environments.
arXiv Detail & Related papers (2022-04-07T17:16:52Z)
- Residual Reinforcement Learning from Demonstrations [51.56457466788513]
Residual reinforcement learning (RL) has been proposed as a way to solve challenging robotic tasks by adapting control actions from a conventional feedback controller to maximize a reward signal.
We extend the residual formulation to learn from visual inputs and sparse rewards using demonstrations.
Our experimental evaluation on simulated manipulation tasks on a 6-DoF UR5 arm and a 28-DoF dexterous hand demonstrates that residual RL from demonstrations is able to generalize to unseen environment conditions more flexibly than either behavioral cloning or RL fine-tuning.
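The control structure described in this entry can be sketched as follows. This is a hedged illustration of the residual-RL action composition, with a hypothetical proportional base controller and a stand-in residual policy; it is not the paper's implementation.

```python
# Hedged sketch of the residual-RL control structure: the executed action is
# a fixed base controller's output plus a learned correction. The base
# controller and residual policy here are hypothetical stand-ins.
import numpy as np

def base_controller(x, K):
    """Conventional feedback controller, e.g. proportional: u = -K x."""
    return -K @ x

def residual_action(x, K, residual_policy, scale=0.1):
    """Superimpose a bounded learned residual on the base action."""
    u_base = base_controller(x, K)
    u_res = np.clip(residual_policy(x), -1.0, 1.0) * scale
    return u_base + u_res
```

With a zero residual the system falls back to the base controller, which is what lets residual RL start from reasonable behavior and improve it to maximize the reward.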
arXiv Detail & Related papers (2021-06-15T11:16:49Z)
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
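The Gaussian-channel view admits a closed form worth noting. Under the assumed model (not necessarily the paper's exact implementation) that one-step dynamics act as a linear Gaussian channel s' = A a + noise, the action-to-state mutual information, a standard empowerment surrogate, is 1/2 log det(I + A Aᵀ / σ²):

```python
# Hedged sketch (assumed linear-Gaussian model, not the paper's
# implementation): for the channel s' = A a + eps with isotropic Gaussian
# noise of variance sigma^2 and unit-variance Gaussian actions, the mutual
# information I(a; s') = 1/2 * log det(I + A A^T / sigma^2).
import numpy as np

def gaussian_empowerment(A, sigma=1.0):
    d = A.shape[0]
    M = np.eye(d) + (A @ A.T) / sigma**2
    sign, logdet = np.linalg.slogdet(M)  # numerically stable log-determinant
    return 0.5 * logdet
```

A trainable representation of the dynamics as such a channel then allows estimating empowerment by plugging a learned A into this formula, including from image observations once a suitable encoder provides the latent state.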
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.