Robot Task Planning and Situation Handling in Open Worlds
- URL: http://arxiv.org/abs/2210.01287v1
- Date: Tue, 4 Oct 2022 00:21:00 GMT
- Title: Robot Task Planning and Situation Handling in Open Worlds
- Authors: Yan Ding, Xiaohan Zhang, Saeid Amiri, Nieqing Cao, Hao Yang, Chad
Esselink, Shiqi Zhang
- Abstract summary: This paper introduces a novel algorithm (COWP) for open-world task planning and situation handling.
COWP dynamically augments the robot's action knowledge with task-oriented common sense.
Our approach significantly outperforms competitive baselines from the literature in the success rate of service tasks.
- Score: 17.812483295011212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated task planning algorithms have been developed to help robots
complete complex tasks that require multiple actions. Most of those algorithms
have been developed for "closed worlds" assuming complete world knowledge is
provided. However, the real world is generally open, and the robots frequently
encounter unforeseen situations that can potentially break the planner's
completeness. This paper introduces a novel algorithm (COWP) for open-world
task planning and situation handling that dynamically augments the robot's
action knowledge with task-oriented common sense. In particular, common sense
is extracted from Large Language Models based on the current task at hand and
robot skills. For systematic evaluations, we collected a dataset that includes
561 execution-time situations in a dining domain, where each situation
corresponds to a state instance of a robot being potentially unable to complete
a task using a solution that normally works. Experimental results show that our
approach significantly outperforms competitive baselines from the literature in
the success rate of service tasks. Additionally, we have demonstrated COWP
using a mobile manipulator. Supplementary materials are available at:
https://cowplanning.github.io/
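The abstract describes a mechanism worth sketching: when an unforeseen situation leaves an action's preconditions unmet, COWP augments the robot's action knowledge with task-oriented common sense drawn from a Large Language Model. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: the LLM query is stubbed out with a hand-written lookup, and all names (`ACTIONS`, `commonsense_substitute`, `handle_situation`) are illustrative assumptions.

```python
# Hypothetical sketch of COWP-style situation handling.
# Closed-world action knowledge: each action maps preconditions -> effects.
ACTIONS = {
    "pour_water": {"preconditions": {"has_cup"}, "effects": {"cup_filled"}},
}

def commonsense_substitute(missing_fact, task):
    """Placeholder for an LLM query such as
    'Can a mug serve as a cup for the task <serve water>?'.
    Here the answer is stubbed with a hand-written lookup."""
    substitutions = {"has_cup": ["has_mug", "has_bowl"]}
    return substitutions.get(missing_fact, [])

def handle_situation(action, state, task):
    """If a precondition is unmet (an open-world 'situation'),
    augment the action's knowledge with a commonsense substitute
    that holds in the current state, then apply the action."""
    spec = ACTIONS[action]
    missing = spec["preconditions"] - state
    for fact in missing:
        for alt in commonsense_substitute(fact, task):
            if alt in state:
                # Dynamically swap the unmet precondition for the substitute.
                spec["preconditions"] = (spec["preconditions"] - {fact}) | {alt}
                break
        else:
            return None  # no commonsense fix found; fall back to replanning
    return state | spec["effects"]

# The cup is missing, but a mug is available: the situation is handled.
print(handle_situation("pour_water", {"has_mug"}, "serve water"))
```

The point of the sketch is only the control flow: execution-time failure triggers a commonsense query, and the answer is folded back into the action's preconditions rather than forcing a full replan.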
Related papers
- Interactive Planning Using Large Language Models for Partially
Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR)
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
- Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds [10.077350377962482]
This paper introduces a novel framework, called COWP, for open-world task planning and situation handling.
COWP dynamically augments the robot's action knowledge, including the preconditions and effects of actions, with task-oriented commonsense knowledge.
Experimental results show that our approach outperforms competitive baselines from the literature in the success rate of service tasks.
arXiv Detail & Related papers (2023-05-27T22:30:15Z)
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables functional plan generation across situated environments.
arXiv Detail & Related papers (2022-09-22T20:29:49Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task.
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
- Reactive Long Horizon Task Execution via Visual Skill and Precondition Models [59.76233967614774]
We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation and from 10% to 80% success rate in the real-world as compared with naive baselines.
arXiv Detail & Related papers (2020-11-17T15:24:01Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that we can reuse prior data to extend new skills simply through dynamic programming.
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
- Enabling human-like task identification from natural conversation [7.00597813134145]
We provide a non-trivial method to combine an NLP engine and a planner such that a robot can successfully identify tasks and all the relevant parameters and generate an accurate plan for the task.
This work makes a significant stride towards enabling a human-like task understanding capability in a robot.
arXiv Detail & Related papers (2020-08-23T17:19:23Z)
- iCORPP: Interleaved Commonsense Reasoning and Probabilistic Planning on Robots [46.13039152809055]
We present a novel algorithm, called iCORPP, to simultaneously estimate the current world state, reason about world dynamics, and construct task-oriented controllers.
Results show significant improvements in scalability, efficiency, and adaptiveness, compared to competitive baselines.
arXiv Detail & Related papers (2020-04-18T17:46:59Z)
- PPMC RL Training Algorithm: Rough Terrain Intelligent Robots through Reinforcement Learning [4.314956204483074]
This paper introduces a generic training algorithm that teaches generalized PPMC in rough environments to any robot.
We show through experiments that the robot learns to generalize to new rough terrain maps while retaining a 100% success rate.
To the best of our knowledge, this is the first paper to present such a generic training algorithm.
arXiv Detail & Related papers (2020-03-02T10:14:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.