Personalized Path Recourse for Reinforcement Learning Agents
- URL: http://arxiv.org/abs/2312.08724v3
- Date: Sun, 03 Nov 2024 09:16:54 GMT
- Title: Personalized Path Recourse for Reinforcement Learning Agents
- Authors: Dat Hong, Tong Wang
- Abstract summary: The goal is to edit a given path of actions to achieve desired goals while ensuring a high similarity to the agent's original path.
We train a personalized recourse agent to generate such personalized paths.
The proposed method is applicable to both reinforcement learning and supervised learning settings.
- Score: 4.768286204382179
- License:
- Abstract: This paper introduces Personalized Path Recourse, a novel method that generates recourse paths for a reinforcement learning agent. The goal is to edit a given path of actions to achieve desired goals (e.g., better outcomes compared to the agent's original path) while ensuring a high similarity to the agent's original paths and being personalized to the agent. Personalization refers to the extent to which the new path is tailored to the agent's observed behavior patterns from their policy function. We train a personalized recourse agent to generate such personalized paths, which are obtained using reward functions that consider the goal, similarity, and personalization. The proposed method is applicable to both reinforcement learning and supervised learning settings for correcting or improving sequences of actions or sequences of data to achieve a pre-determined goal. The method is evaluated in various settings. Experiments show that our model not only recourses for a better outcome but also adapts to different agents' behavior.
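As a rough illustration of how the three reward terms could be combined, the sketch below scores an edited path by goal outcome, action overlap with the original path, and plausibility of the edited actions under the agent's own policy. The weights and the specific similarity and personalization measures are assumptions for illustration, not the paper's exact formulation.
```python
import numpy as np

def recourse_reward(edited_path, original_path, goal_reward, policy_log_prob,
                    w_goal=1.0, w_sim=0.5, w_pers=0.5):
    """Toy combination of the goal, similarity, and personalization terms.

    edited_path, original_path -- lists of (state, action) pairs
    goal_reward                -- scalar outcome achieved by the edited path (assumed given)
    policy_log_prob            -- callable (state, action) -> log pi_agent(action | state),
                                  used here as a stand-in for personalization
    """
    # Similarity: fraction of timesteps where the edited action matches the original one.
    T = min(len(edited_path), len(original_path))
    matches = sum(1 for t in range(T) if edited_path[t][1] == original_path[t][1])
    similarity = matches / max(T, 1)

    # Personalization: how plausible the edited actions are under the agent's own policy.
    personalization = np.mean([policy_log_prob(s, a) for s, a in edited_path])

    return w_goal * goal_reward + w_sim * similarity + w_pers * personalization
```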
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
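As a generic illustration of step-wise (per-action) rewards replacing a single end-of-episode reward, the toy function below scores each step by agreement with an assumed expert policy; this is only a sketch, not StepAgent's actual reward construction.
```python
def stepwise_rewards(trajectory, expert_action_fn):
    """Assign a reward at every step instead of only at the end of the episode.

    trajectory       -- list of (state, action) pairs taken by the learner
    expert_action_fn -- callable state -> expert action (assumed available)
    """
    # +1 when the learner's action matches what the expert would do, else 0.
    return [1.0 if action == expert_action_fn(state) else 0.0
            for state, action in trajectory]
```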
arXiv Detail & Related papers (2024-11-06T10:35:11Z) - Transfer Reinforcement Learning in Heterogeneous Action Spaces using Subgoal Mapping [9.81076530822611]
We propose a method that learns a subgoal mapping between the expert agent policy and the learner agent policy.
We learn this subgoal mapping by training a Long Short Term Memory (LSTM) network for a distribution of tasks.
We demonstrate that the proposed learning scheme can effectively find the subgoal mapping underlying the given distribution of tasks.
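A minimal sketch of the kind of LSTM one might train for such a mapping, reading an expert state sequence and emitting a subgoal vector per step; the dimensions and training signal are placeholder assumptions.
```python
import torch
import torch.nn as nn

class SubgoalMapper(nn.Module):
    """Toy LSTM that maps an expert state sequence to a sequence of subgoal vectors."""

    def __init__(self, state_dim=8, hidden_dim=64, subgoal_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, subgoal_dim)

    def forward(self, expert_states):          # (batch, T, state_dim)
        hidden, _ = self.lstm(expert_states)   # (batch, T, hidden_dim)
        return self.head(hidden)               # (batch, T, subgoal_dim)

# Example: predict subgoals for a batch of 2 expert trajectories of length 10.
mapper = SubgoalMapper()
subgoals = mapper(torch.randn(2, 10, 8))
```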
arXiv Detail & Related papers (2024-10-18T14:08:41Z) - A State Augmentation based approach to Reinforcement Learning from Human
Preferences [20.13307800821161]
Preference-Based Reinforcement Learning attempts to learn from human preferences by utilizing binary feedback on queried trajectory pairs.
We present a state augmentation technique that allows the agent's reward model to be robust.
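For context, preference-based methods typically fit a reward model so that the preferred trajectory in each queried pair receives the higher predicted return, e.g. via a Bradley-Terry-style loss as sketched below; the state augmentation step itself is not reproduced here, and the network sizes are arbitrary assumptions.
```python
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(states_a, states_b, a_preferred):
    """Bradley-Terry loss on a queried trajectory pair (each input is a (T, 8) tensor)."""
    r_a = reward_model(states_a).sum()   # predicted return of trajectory A
    r_b = reward_model(states_b).sum()   # predicted return of trajectory B
    # Probability that A is preferred under the current reward model.
    p_a = torch.sigmoid(r_a - r_b)
    target = torch.tensor(1.0 if a_preferred else 0.0)
    return nn.functional.binary_cross_entropy(p_a, target)
```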
arXiv Detail & Related papers (2023-02-17T07:10:50Z) - A Fully Controllable Agent in the Path Planning using Goal-Conditioned
Reinforcement Learning [0.0]
In path planning, routes can vary with many factors, so it is important for the agent to be able to reach a variety of goals.
I propose a novel reinforcement learning framework for a fully controllable agent in path planning.
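As background, a goal-conditioned agent simply takes the desired goal as an extra policy input, along the lines of the toy network below; the sizes are arbitrary assumptions and this is not the paper's specific architecture.
```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """Toy policy that conditions on both the current state and the desired goal."""

    def __init__(self, state_dim=4, goal_dim=2, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + goal_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state, goal):
        # Concatenate state and goal, then output action logits.
        return self.net(torch.cat([state, goal], dim=-1))

# Example: action logits for one state paired with one goal.
logits = GoalConditionedPolicy()(torch.zeros(4), torch.zeros(2))
```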
arXiv Detail & Related papers (2022-05-20T05:18:03Z) - Explaining Reinforcement Learning Policies through Counterfactual
Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z) - Contrastive Explanations for Comparing Preferences of Reinforcement
Learning Agents [16.605295052893986]
In complex tasks where the reward function is not straightforward, multiple reinforcement learning (RL) policies can be trained by adjusting the impact of individual objectives on the reward function.
In this work, we compare the behavior of two policies trained on the same task but with different preferences over objectives.
We propose a method for distinguishing between differences in behavior that stem from different abilities and those that are a consequence of the two RL agents' opposing preferences.
arXiv Detail & Related papers (2021-12-17T11:57:57Z) - You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory
Prediction [52.442129609979794]
Recent deep learning approaches for trajectory prediction show promising performance.
It remains unclear which features such black-box models actually learn to use for making predictions.
This paper proposes a procedure that quantifies the contributions of different cues to model performance.
arXiv Detail & Related papers (2021-10-11T14:24:15Z) - Adversarial Intrinsic Motivation for Reinforcement Learning [60.322878138199364]
We investigate whether the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution can be utilized effectively for reinforcement learning tasks.
Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function.
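Schematically, the Wasserstein-1 dual yields a learned potential over states, and the change in that potential along a transition can be added as a supplemental reward; the sketch below is a simplified illustration, not AIM's exact objective or constraints.
```python
import torch
import torch.nn as nn

# Learned dual potential over states (arbitrary toy sizes).
potential = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

def supplemental_reward(state, next_state):
    """Shaping bonus: how much the learned potential increases along the transition."""
    with torch.no_grad():
        return (potential(next_state) - potential(state)).item()

# The potential itself would be trained on the Wasserstein-1 dual objective, roughly:
#   maximize  E_target[potential(s)] - E_policy[potential(s)]
# subject to a (smoothed) 1-Lipschitz constraint, e.g. via a gradient penalty.
```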
arXiv Detail & Related papers (2021-05-27T17:51:34Z) - Outcome-Driven Reinforcement Learning via Variational Inference [95.82770132618862]
We discuss a new perspective on reinforcement learning, recasting it as the problem of inferring actions that achieve desired outcomes, rather than a problem of maximizing rewards.
To solve the resulting outcome-directed inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function.
We empirically demonstrate that this method eliminates the need to design reward functions and leads to effective goal-directed behaviors.
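In spirit, the reward is derived from how likely the desired outcome is given the next state rather than being hand-designed; the toy signal below uses a Gaussian log-likelihood as a stand-in and is only an illustrative assumption, not the paper's variational derivation.
```python
import numpy as np

def outcome_reward(next_state, desired_outcome, scale=1.0):
    """Toy 'reward as inference' signal: log-likelihood of the desired outcome
    under a Gaussian centered at the next state (an illustrative assumption)."""
    diff = np.asarray(next_state, dtype=float) - np.asarray(desired_outcome, dtype=float)
    # Proportional to log N(desired_outcome | next_state, I / scale), up to a constant.
    return -scale * float(diff @ diff)
```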
arXiv Detail & Related papers (2021-04-20T18:16:21Z) - Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
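A minimal sketch of the curriculum idea, assuming an ensemble of value estimators: prioritize candidate goals where the ensemble disagrees most, which tends to pick goals at the frontier of the agent's current competence. The scoring below is a sketch, not the paper's exact procedure.
```python
import numpy as np

def select_training_goals(candidate_goals, value_ensemble, k=8):
    """Pick the k candidate goals with the highest disagreement (std) across
    an ensemble of value estimators (each a callable goal -> value estimate)."""
    scores = [np.std([v(goal) for v in value_ensemble]) for goal in candidate_goals]
    ranked = np.argsort(scores)[::-1][:k]   # indices of the most contested goals
    return [candidate_goals[i] for i in ranked]
```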
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.