Learning and Reasoning for Robot Dialog and Navigation Tasks
- URL: http://arxiv.org/abs/2005.09833v2
- Date: Mon, 31 Aug 2020 02:07:43 GMT
- Title: Learning and Reasoning for Robot Dialog and Navigation Tasks
- Authors: Keting Lu, Shiqi Zhang, Peter Stone, Xiaoping Chen
- Abstract summary: We develop algorithms for robot task completion that exploit the complementary strengths of reinforcement learning and probabilistic reasoning techniques.
The robots learn from trial-and-error experiences to augment their declarative knowledge base.
We have implemented and evaluated the developed algorithms using mobile robots conducting dialog and navigation tasks.
- Score: 44.364322669414776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning and probabilistic reasoning algorithms aim at learning
from interaction experiences and reasoning with probabilistic contextual
knowledge, respectively. In this research, we develop algorithms for robot task
completion that exploit the complementary strengths of reinforcement
learning and probabilistic reasoning techniques. The robots learn from
trial-and-error experiences to augment their declarative knowledge base, and
the augmented knowledge can be used for speeding up the learning process in
potentially different tasks. We have implemented and evaluated the developed
algorithms using mobile robots conducting dialog and navigation tasks. From the
results, we see that our robot's performance can be improved by both reasoning
with human knowledge and learning from task-completion experience. More
interestingly, the robot was able to learn from navigation tasks to improve its
dialog strategies.
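The paper does not specify its algorithm in this abstract; as a minimal illustration of the general idea, trial-and-error experience from one task can be retained as declarative knowledge and reused to warm-start learning in a later task. The sketch below (toy corridor environment, tabular Q-learning, all names hypothetical) shows a learned value table being passed back in as a prior:

```python
import random
from collections import defaultdict

class CorridorEnv:
    """Toy navigation task: move right along a corridor to reach a goal."""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        return 0
    def actions(self, state):
        return ["left", "right"]
    def step(self, state, action):
        nxt = max(0, state - 1) if action == "left" else state + 1
        done = nxt == self.length
        reward = 1.0 if done else -0.1   # step cost, goal bonus
        return nxt, reward, done

def q_learning(env, episodes, q=None, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning; `q` may be pre-seeded with prior knowledge."""
    q = defaultdict(float, q or {})
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            acts = env.actions(state)
            action = (random.choice(acts) if random.random() < eps
                      else max(acts, key=lambda a: q[(state, a)]))
            nxt, reward, done = env.step(state, action)
            target = reward if done else reward + gamma * max(
                q[(nxt, a)] for a in env.actions(nxt))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

random.seed(0)
env = CorridorEnv()
q_scratch = q_learning(env, episodes=100)        # learn from trial and error
# Treat the learned values as knowledge seeding a later (here: identical) task:
q_warm = q_learning(env, episodes=5, q=q_scratch)
```

In the paper's setting the reused knowledge is declarative (e.g., facts for a probabilistic reasoner) rather than raw Q-values, and the tasks differ (navigation experience informing dialog strategy), but the warm-start mechanism above conveys why prior experience can speed up learning.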
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that decomposes tasks into smaller learning subproblems and combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z) - SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning [17.125080112897102]
This paper addresses a challenging interactive task learning scenario where the robot is unaware of a concept that's key to solving the instructed task.
We propose SECURE, an interactive task learning framework designed to solve such problems by fixing a deficient domain model using embodied conversation.
Using SECURE, the robot not only learns from the user's corrective feedback when it makes a mistake, but it also learns to make strategic dialogue decisions for revealing useful evidence about novel concepts for solving the instructed task.
arXiv Detail & Related papers (2024-09-26T11:40:07Z) - Continual Skill and Task Learning via Dialogue [3.3511259017219297]
Continual and interactive robot learning is a challenging problem because the robot must learn in the presence of human users.
We present a framework for robots to query and learn visuo-motor robot skills and task relevant information via natural language dialog interactions with human users.
arXiv Detail & Related papers (2024-09-05T01:51:54Z) - Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful approach for enabling a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Learning robot motor skills with mixed reality [0.8121462458089141]
Mixed Reality (MR) has recently shown great success as an intuitive interface for enabling end-users to teach robots.
We propose a learning framework where end-users teach robots a) motion demonstrations, b) task constraints, c) planning representations, and d) object information.
We hypothesize that conveying this world knowledge will be intuitive with an MR interface, and that a sample-efficient motor skill learning framework will enable robots to effectively solve complex tasks.
arXiv Detail & Related papers (2022-03-21T20:25:40Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z) - Deep Reinforcement Learning with Interactive Feedback in a Human-Robot Environment [1.2998475032187096]
We propose a deep reinforcement learning approach with interactive feedback to learn a domestic task in a human-robot scenario.
We compare three different learning methods using a simulated robotic arm for the task of organizing different objects.
The obtained results show that a learner agent, using either agent-IDeepRL or human-IDeepRL, completes the given task earlier and has fewer mistakes compared to the autonomous DeepRL approach.
arXiv Detail & Related papers (2020-07-07T11:55:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.