Value of Assistance for Mobile Agents
- URL: http://arxiv.org/abs/2308.11961v1
- Date: Wed, 23 Aug 2023 07:02:57 GMT
- Title: Value of Assistance for Mobile Agents
- Authors: Adi Amuzig, David Dovrat and Sarah Keren
- Abstract summary: Mobile robotic agents often suffer from localization uncertainty which grows with time and with the agents' movement.
In some settings, it may be possible to perform assistive actions that reduce uncertainty about a robot's location.
We propose Value of Assistance (VOA) to represent the expected cost reduction that assistance will yield at a given point of execution.
- Score: 7.922832585855347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile robotic agents often suffer from localization uncertainty which grows
with time and with the agents' movement. This can hinder their ability to
accomplish their task. In some settings, it may be possible to perform
assistive actions that reduce uncertainty about a robot's location. For
example, in a collaborative multi-robot system, a wheeled robot can request
assistance from a drone that can fly to its estimated location and reveal its
exact location on the map or accompany it to its intended location. Since
assistance may be costly and limited, and may be requested by different members
of a team, there is a need for principled ways to support the decision of which
assistance to provide to an agent and when, as well as to decide which agent to
help within a team. For this purpose, we propose Value of Assistance (VOA) to
represent the expected cost reduction that assistance will yield at a given
point of execution. We offer ways to compute VOA based on estimations of the
robot's future uncertainty, modeled as a Gaussian process. We specify
conditions under which our VOA measures are valid and empirically demonstrate
the ability of our measures to predict the agent's average cost reduction when
receiving assistance in both simulated and real-world robotic settings.
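The VOA idea above can be illustrated with a toy computation. This is a hedged sketch, not the paper's actual formulation: localization variance is assumed to grow linearly with travel time (a stand-in for the paper's Gaussian-process estimate), assistance resets it, and the cost model is invented for illustration; all function names and parameters are assumptions.

```python
# Toy sketch of Value of Assistance (VOA): expected cost reduction from a
# localization reset. The linear variance growth and the cost model below
# are illustrative assumptions, not the paper's actual method.

def predicted_variance(t, growth_rate=0.05, v0=0.01):
    """Assumed linear growth of localization variance along the plan."""
    return v0 + growth_rate * t

def expected_cost(variance_at_goal, base_cost=10.0, penalty=20.0):
    """Toy cost model: recovery/replanning effort scales with final variance."""
    return base_cost + penalty * variance_at_goal

def voa(t_assist, t_goal, reset_var=0.01):
    """Expected cost reduction if assistance (e.g. a drone fix) is given at t_assist."""
    cost_no_assist = expected_cost(predicted_variance(t_goal))
    # After assistance, uncertainty re-grows only over the remaining time.
    var_with = predicted_variance(t_goal - t_assist, v0=reset_var)
    cost_with = expected_cost(var_with)
    return cost_no_assist - cost_with

print(round(voa(t_assist=2.0, t_goal=10.0), 3))  # 2.0
print(round(voa(t_assist=8.0, t_goal=10.0), 3))  # 8.0
```

In this toy model, assisting later (closer to the goal) leaves less time for uncertainty to re-grow, so VOA increases with the assistance time; comparing VOA across agents or assistance times is exactly the decision support the paper motivates.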
Related papers
- Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning [52.91457780361305]
This paper introduces cooperative language-guided inverse plan search (CLIPS)
Our agent assists a human by modeling them as a cooperative planner who communicates joint plans to the assistant.
We evaluate these capabilities in two cooperative planning domains (Doors, Keys & Gems and VirtualHome)
arXiv Detail & Related papers (2024-02-27T23:06:53Z)
- Robot Trajectron: Trajectory Prediction-based Shared Control for Robot Manipulation [2.273531916003657]
We develop a novel intent estimator dubbed the Robot Trajectron (RT)
RT produces a probabilistic representation of the robot's anticipated trajectory based on its recent position, velocity and acceleration history.
We derive a novel shared-control solution that combines RT's predictive capacity with a representation of the locations of potential reaching targets.
arXiv Detail & Related papers (2024-02-04T14:18:20Z)
- Value of Assistance for Grasping [6.452975320319021]
We provide a measure for assessing the expected effect a specific observation will have on the robot's ability to complete its task.
We evaluate our suggested measure in simulated and real-world collaborative grasping settings.
arXiv Detail & Related papers (2023-10-22T20:25:08Z)
- Decision Making for Human-in-the-loop Robotic Agents via Uncertainty-Aware Reinforcement Learning [13.184897303302971]
In a Human-in-the-Loop paradigm, a robotic agent is able to act mostly autonomously in solving a task, but can request help from an external expert when needed.
We present a Reinforcement Learning based approach to this problem, where a semi-autonomous agent asks for external assistance when it has low confidence in the eventual success of the task.
We show that our method makes effective use of a limited budget of expert calls at run-time, despite having no access to the expert at training time.
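The budget-limited help-request policy described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's RL algorithm: the confidence estimate, threshold, and names are all assumptions.

```python
# Hedged sketch of "ask for help at low confidence" under a call budget.
# The threshold rule below is illustrative; the paper learns when to ask
# via reinforcement learning rather than using a fixed cutoff.

def act(state_confidence, budget, threshold=0.4):
    """Return (action_source, remaining_budget). All names are illustrative."""
    if state_confidence < threshold and budget > 0:
        return "expert", budget - 1
    return "policy", budget

budget = 2
decisions = []
for conf in [0.9, 0.3, 0.8, 0.2, 0.1]:
    source, budget = act(conf, budget)
    decisions.append(source)
print(decisions)  # → ['policy', 'expert', 'policy', 'expert', 'policy']
```

Note how the last low-confidence state falls back to the agent's own policy because the two expert calls are already spent, mirroring the limited-budget constraint the paper addresses.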
arXiv Detail & Related papers (2023-03-12T17:22:54Z)
- When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning [57.53138994155612]
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world.
A critical challenge is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table.
We propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them.
arXiv Detail & Related papers (2022-10-19T17:57:24Z)
- Multi-Agent Neural Rewriter for Vehicle Routing with Limited Disclosure of Costs [65.23158435596518]
We solve the multi-vehicle routing problem as a team Markov game with partially observable costs.
Our multi-agent reinforcement learning approach, the so-called multi-agent Neural Rewriter, builds on the single-agent Neural Rewriter to solve the problem by iteratively rewriting solutions.
arXiv Detail & Related papers (2022-06-13T09:17:40Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
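The chance-constraint idea mentioned above has a standard closed form for Gaussian uncertainty, sketched below. This is a generic one-axis illustration, not SABER's actual formulation: the function name and parameters are assumptions.

```python
# Hedged sketch of a Gaussian chance constraint for obstacle avoidance:
# "P(collision) <= eps" can be enforced by inflating the obstacle's
# keep-out radius by a quantile of the position error (one-axis case).
from statistics import NormalDist

def inflated_radius(obstacle_radius, sigma, eps=0.05):
    """Keep-out radius guaranteeing P(collision) <= eps along one axis."""
    z = NormalDist().inv_cdf(1.0 - eps)  # one-sided Gaussian quantile
    return obstacle_radius + z * sigma

print(round(inflated_radius(0.5, sigma=0.1), 3))
```

Larger predicted uncertainty (sigma) or a stricter risk bound (smaller eps) widens the margin, which is why a fast uncertainty estimate, such as the recurrent network above, matters for the SMPC horizon.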
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Building Affordance Relations for Robotic Agents - A Review [7.50722199393581]
Affordances describe the possibilities for an agent to perform actions with an object.
We review and find common ground amongst different strategies that use the concept of affordances within robotic tasks.
We identify and discuss a range of interesting research directions involving affordances that have the potential to improve the capabilities of an AI agent.
arXiv Detail & Related papers (2021-05-14T08:35:18Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Symbiotic System Design for Safe and Resilient Autonomous Robotics in Offshore Wind Farms [3.5409202655473724]
Barriers to Beyond Visual Line of Sight (BVLOS) robotics include operational safety compliance and resilience.
We propose a symbiotic system reflecting lifecycle learning and co-evolution, with knowledge sharing for mutual gain between robotic platforms and remote human operators.
Our methodology enables the run-time verification of safety, reliability and resilience during autonomous missions.
arXiv Detail & Related papers (2021-01-23T11:58:16Z)
- AvE: Assistance via Empowerment [77.08882807208461]
We propose a new paradigm for assistance by instead increasing the human's ability to control their environment.
This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state.
arXiv Detail & Related papers (2020-06-26T04:40:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.