SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
- URL: http://arxiv.org/abs/2409.17755v1
- Date: Thu, 26 Sep 2024 11:40:07 GMT
- Title: SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
- Authors: Rimvydas Rubavicius, Peter David Fagan, Alex Lascarides, Subramanian Ramamoorthy
- Abstract summary: This paper addresses a challenging interactive task learning scenario where the robot is unaware of a concept that's key to solving the instructed task.
We propose SECURE, an interactive task learning framework designed to solve such problems by fixing a deficient domain model using embodied conversation.
Using SECURE, the robot not only learns from the user's corrective feedback when it makes a mistake, but it also learns to make strategic dialogue decisions for revealing useful evidence about novel concepts for solving the instructed task.
- Score: 17.125080112897102
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper addresses a challenging interactive task learning scenario we call rearrangement under unawareness: to manipulate a rigid-body environment in a context where the robot is unaware of a concept that's key to solving the instructed task. We propose SECURE, an interactive task learning framework designed to solve such problems by fixing a deficient domain model using embodied conversation. Through dialogue, the robot discovers and then learns to exploit unforeseen possibilities. Using SECURE, the robot not only learns from the user's corrective feedback when it makes a mistake, but it also learns to make strategic dialogue decisions for revealing useful evidence about novel concepts for solving the instructed task. Together, these abilities allow the robot to generalise to subsequent tasks using newly acquired knowledge. We demonstrate that a robot that is semantics-aware -- that is, it exploits the logical consequences of both sentence and discourse semantics in the learning and inference process -- learns to solve rearrangement under unawareness more effectively than a robot that lacks such capabilities.
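The SECURE implementation itself is not part of this listing. As a minimal sketch only, the loop below shows the general shape of interactive task learning under unawareness: the robot detects a concept it cannot ground, asks the user about it, and folds the answer back into its domain model before acting. All names (`DomainModel`, `execute`, the canned answers) are illustrative assumptions, not the authors' API.

```python
# Hypothetical sketch of rearrangement under unawareness: the robot plans with
# a deficient domain model and repairs it through embodied conversation.
# None of these names come from the SECURE paper; they are illustrative only.

class DomainModel:
    """Holds the concepts (predicates) the robot currently knows about."""
    def __init__(self, known_concepts):
        self.known = set(known_concepts)

    def grounds(self, concept):
        return concept in self.known

    def extend(self, concept, evidence):
        # Learning step: add the novel concept with user-supplied evidence.
        self.known.add(concept)
        print(f"learned '{concept}' from evidence: {evidence}")


def execute(instruction, model, ask_user):
    """One dialogue turn: act once every concept grounds, otherwise ask first."""
    for concept in instruction["concepts"]:
        if not model.grounds(concept):
            # Strategic dialogue move: query the unknown concept up front
            # instead of guessing and waiting for corrective feedback.
            model.extend(concept, ask_user(f"What does '{concept}' mean here?"))
    print(f"executing: {instruction['text']}")


model = DomainModel(known_concepts={"red", "cube", "left-of"})
instruction = {"text": "put the mug on the saucer",
               "concepts": ["mug", "saucer", "on"]}
canned = {"mug": "the white cylinder with a handle",
          "saucer": "the small plate", "on": "supported from below"}
execute(instruction, model, ask_user=lambda q: canned[q.split("'")[1]])
```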
Related papers
- Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z)
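As a rough illustration of the discover-by-generating idea (not the paper's method), the toy loop below generates manipulation tasks, attempts them with a stand-in learner, and retains the policy of every solved task as a reusable skill.

```python
import random

# Toy sketch: skills emerge from solving many automatically generated tasks;
# the generator, learner, and success rate below are all invented stand-ins.

def generate_task(rng):
    """Stand-in task generator: move an object to a random goal pose."""
    return {"object": rng.choice(["cube", "ball", "peg"]),
            "goal": (round(rng.uniform(0, 1), 2), round(rng.uniform(0, 1), 2))}

def solve(task, rng):
    """Stand-in learner: pretend to train a policy and report success."""
    return rng.random() > 0.3, f"policy<{task['object']}->{task['goal']}>"

rng = random.Random(0)
skills = []
for _ in range(20):
    task = generate_task(rng)
    solved, policy = solve(task, rng)
    if solved:
        skills.append(policy)   # retained skills can be composed on unseen tasks

print(f"discovered {len(skills)} composable skills")
```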
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
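One plausible reading of combining affordance grounding with self-explanation is a shared visual encoder with two output heads, one for affordances and one for the caption. The PyTorch sketch below is a stand-in under that assumption; the architecture, dimensions, and vocabulary are invented, not SEA's actual model.

```python
import torch
import torch.nn as nn

# Invented stand-in for joint affordance grounding + embodied caption:
# a shared encoder with an affordance head and a (one-token) caption head.

class ToySEA(nn.Module):
    def __init__(self, feat_dim=128, num_affordances=10, vocab_size=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.affordance_head = nn.Linear(feat_dim, num_affordances)  # where/how to act
        self.caption_head = nn.Linear(feat_dim, vocab_size)          # why: explanation token

    def forward(self, image):
        z = self.encoder(image)
        return self.affordance_head(z), self.caption_head(z)

model = ToySEA()
affordance_logits, caption_logits = model(torch.randn(1, 3, 32, 32))
print(affordance_logits.shape, caption_logits.shape)  # (1, 10) (1, 100)
```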
- Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models [23.945922720555146]
We propose a system to achieve incremental learning of complex behavior from natural interaction.
We integrate the system in the robot cognitive architecture of the humanoid robot ARMAR-6.
arXiv Detail & Related papers (2023-09-08T13:29:05Z)
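The ARMAR-6 integration is not reproduced here. A common pattern for incremental behavior learning from natural interaction, assumed below rather than taken from the paper, is to store a newly explained behavior as a named composition of already known ones; all names are invented.

```python
# Assumed pattern: incrementally grow a behavior library from interaction.
# A new composite behavior is stored as a sequence of known behaviors.

behaviors = {
    "grasp_cup": lambda: print("grasping cup"),
    "pour": lambda: print("pouring"),
    "hand_over": lambda: print("handing over"),
}

def teach(name, steps):
    """Add a composite behavior the user described during interaction."""
    missing = [s for s in steps if s not in behaviors]
    if missing:
        raise KeyError(f"unknown sub-behaviors: {missing}")  # would trigger a follow-up question
    behaviors[name] = lambda: [behaviors[s]() for s in steps]

# The user explains an unknown command once; afterwards it can be reused.
teach("serve_drink", ["grasp_cup", "pour", "hand_over"])
behaviors["serve_drink"]()
```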
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- "No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy [70.45420918526926]
We present LILAC, a framework for incorporating and adapting to natural language corrections online during execution.
Instead of discrete turn-taking between a human and robot, LILAC splits agency between the human and robot.
We show that our corrections-aware approach obtains higher task completion rates, and is subjectively preferred by users.
arXiv Detail & Related papers (2023-01-06T15:03:27Z)
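One way to picture LILAC's split agency, offered here as an assumption rather than the paper's controller, is a loop in which the robot keeps executing while a spoken correction shifts its commanded goal mid-motion. The phrase-to-offset table is made up.

```python
import numpy as np

# Invented stand-in for online language corrections during execution: the
# robot follows its plan, and a correction adds a workspace offset to the
# commanded goal instead of stopping for a whole new instruction.

CORRECTIONS = {                       # made-up phrase -> offset in metres
    "no, to the right": np.array([0.05, 0.0, 0.0]),
    "a bit higher":     np.array([0.0, 0.0, 0.05]),
}

def step_towards(pos, goal, speed=0.02):
    direction = goal - pos
    dist = np.linalg.norm(direction)
    return goal if dist < speed else pos + speed * direction / dist

pos, goal = np.zeros(3), np.array([0.2, 0.0, 0.1])
script = {3: "no, to the right", 6: "a bit higher"}  # corrections mid-execution

for t in range(30):
    if t in script:                   # the human speaks while the robot moves
        goal = goal + CORRECTIONS[script[t]]
        print(f"t={t}: applied '{script[t]}', new goal {goal}")
    pos = step_towards(pos, goal)
print("final position:", np.round(pos, 3))
```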
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
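A standard way to realize "programming-free" task definition from image examples, assumed here rather than taken from the paper, is to train a success classifier per substep from the user's example images and reward the policy with the current substep's score; the classifier below is a random stub.

```python
import random

# Assumed realization of image-based substep guidance: each user-defined
# substep gets a success classifier; the reward is stage progress plus the
# current classifier's score. The classifier is a random stub.

class SubstepClassifier:
    """Stub for a success detector trained on user-provided example images."""
    def __init__(self, name):
        self.name = name
    def score(self, observation):
        return random.random()        # stand-in for p(substep solved | image)

substeps = [SubstepClassifier(n) for n in ("reorient", "grasp", "insert")]

def reward(observation, stage):
    """Denser reward than final success alone: progress through substeps."""
    return stage + substeps[stage].score(observation)

random.seed(0)
stage, obs, total = 0, None, 0.0
for t in range(100):
    total += reward(obs, stage)
    if substeps[stage].score(obs) > 0.95 and stage < len(substeps) - 1:
        stage += 1                    # advance to the next user-defined substep
print(f"reached substep '{substeps[stage].name}', cumulative reward {total:.1f}")
```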
- Talk-to-Resolve: Combining scene understanding and spatial dialogue to resolve granular task ambiguity for a collocated robot [15.408128612723882]
The utility of collocated robots largely depends on an easy and intuitive mechanism for interacting with humans.
We present a system called Talk-to-Resolve (TTR) that enables a robot to initiate a coherent dialogue exchange with the instructor.
Our system can identify such stalemates and resolve them through appropriate dialogue exchanges with 82% accuracy.
arXiv Detail & Related papers (2021-11-22T10:42:59Z)
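A toy version of "identify the stalemate and resolve it", invented for illustration rather than drawn from the TTR pipeline, detects that an instruction matches several objects in the scene and asks a discriminating spatial question.

```python
# Toy referential-stalemate resolver: the instruction matches several objects,
# so the robot asks a spatial question. Scene and question template invented.

scene = [
    {"name": "cup", "color": "red",  "position": "left"},
    {"name": "cup", "color": "blue", "position": "right"},
    {"name": "bowl", "color": "red", "position": "center"},
]

def resolve(noun, answer_fn):
    candidates = [o for o in scene if o["name"] == noun]
    if len(candidates) == 1:
        return candidates[0]
    # Stalemate: more than one referent. Ask a discriminating question.
    options = " or ".join(f"the {o['position']} one" for o in candidates)
    answer = answer_fn(f"Which {noun} do you mean: {options}?")
    return next(o for o in candidates if o["position"] in answer)

target = resolve("cup", answer_fn=lambda q: (print(q) or "the left one"))
print("grounded to:", target)
```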
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
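Retaining experiences suggests keeping each task's replay data and mixing it into training for later tasks. The sketch below shows only that data-retention pattern; the actual RL learner is stubbed out, and all numbers are invented.

```python
import random

# Sketch of lifelong RL by retaining experiences: transitions from earlier
# tasks stay in the buffer and are mixed into training for each new task.

buffer = {}                           # task_id -> list of transitions

def collect(task_id, n, rng):
    buffer.setdefault(task_id, []).extend(
        {"task": task_id, "obs": rng.random(), "reward": rng.random()}
        for _ in range(n))

def train_on(task_id, rng, old_fraction=0.5):
    """Stubbed training step: mix new-task data with retained old-task data."""
    new = buffer[task_id]
    old = [tr for tid, trs in buffer.items() if tid != task_id for tr in trs]
    batch = new + rng.sample(old, min(len(old), int(len(new) * old_fraction)))
    print(f"task {task_id}: batch of {len(batch)} transitions "
          f"({len(batch) - len(new)} retained from earlier tasks)")

rng = random.Random(0)
for task in range(3):                 # tasks arrive sequentially, not all at once
    collect(task, n=100, rng=rng)
    train_on(task, rng)
```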
- Axiom Learning and Belief Tracing for Transparent Decision Making in Robotics [8.566457170664926]
A robot's ability to provide descriptions of its decisions and beliefs promotes effective collaboration with humans.
Our architecture couples the complementary strengths of non-monotonic logical reasoning, deep learning, and decision-tree induction.
During reasoning and learning, the architecture enables a robot to provide on-demand relational descriptions of its decisions, beliefs, and the outcomes of hypothetical actions.
arXiv Detail & Related papers (2020-10-20T22:09:17Z)
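The decision-tree component can be pictured as inducing a human-readable rule about action outcomes that the robot can then cite when explaining a decision. The scikit-learn sketch below uses made-up features and data, not the paper's architecture.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative only: induce a readable rule ("axiom") about action outcomes
# from experience. Features and labels are made up:
# x = [object_is_heavy, surface_is_slippery], y = 1 iff pickup succeeded.

X = [[0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1]]
y = [1, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["object_is_heavy", "surface_is_slippery"]))
# The induced split reads as a defeasible axiom, roughly: "pickup normally
# succeeds unless the object is heavy and the surface is slippery".
```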
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
The algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
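Intrinsic motivation is often operationalized as learning progress: practice the affordance whose prediction error is currently improving fastest. The selection rule below is a generic sketch of that idea, not the paper's algorithm; the affordances and error dynamics are invented.

```python
import random

# Generic learning-progress sketch: pick the affordance whose prediction
# error is dropping fastest, i.e. where the robot is learning the most.

rng = random.Random(1)
errors = {"push": [1.0], "grasp": [1.0], "stack": [1.0]}

def learning_progress(history, window=3):
    recent = history[-window:]
    return recent[0] - recent[-1] if len(recent) >= 2 else 1.0

for step in range(30):
    # Intrinsic reward: practice whichever affordance is improving fastest.
    choice = max(errors, key=lambda a: learning_progress(errors[a]))
    decay = {"push": 0.9, "grasp": 0.8, "stack": 0.97}[choice]  # stand-in learning
    errors[choice].append(errors[choice][-1] * decay + rng.uniform(0, 0.01))

print({a: round(h[-1], 3) for a, h in errors.items()})
```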
- Learning and Reasoning for Robot Dialog and Navigation Tasks [44.364322669414776]
We develop algorithms for robot task completion that draw on the complementary strengths of reinforcement learning and probabilistic reasoning techniques.
The robots learn from trial-and-error experiences to augment their declarative knowledge base.
We have implemented and evaluated the developed algorithms using mobile robots conducting dialog and navigation tasks.
arXiv Detail & Related papers (2020-05-20T03:20:42Z)
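One minimal picture of augmenting a declarative knowledge base from trial and error, offered purely as an illustration, is to promote reliably observed outcomes into facts that the dialog and navigation planner can reuse; the domain and threshold below are invented.

```python
import random
from collections import Counter

# Illustrative only: trial-and-error outcomes are promoted into declarative
# facts once observed reliably, augmenting the KB the planner reasons over.

rng = random.Random(0)
kb = set()                            # declarative knowledge base (facts)
outcomes = Counter()

def try_door(door):
    """Stand-in navigation trial: is this door open right now?"""
    return rng.random() < {"door_a": 0.9, "door_b": 0.2}[door]

for _ in range(50):
    door = rng.choice(["door_a", "door_b"])
    outcomes[(door, try_door(door))] += 1

for door in ("door_a", "door_b"):
    succ, fail = outcomes[(door, True)], outcomes[(door, False)]
    if succ + fail and succ / (succ + fail) > 0.8:
        kb.add(f"usually_open({door})")   # learned fact, usable in reasoning

print("augmented KB:", kb)
```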