SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
- URL: http://arxiv.org/abs/2409.17755v2
- Date: Mon, 10 Feb 2025 18:39:13 GMT
- Title: SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
- Authors: Rimvydas Rubavicius, Peter David Fagan, Alex Lascarides, Subramanian Ramamoorthy
- Abstract summary: SECURE is an interactive task learning framework for rearrangement under unawareness, where the agent must manipulate objects while lacking a concept key to the instructed task.
It uses embodied conversation to fix its deficient domain model.
We demonstrate that learning to solve rearrangement under unawareness is more data efficient when the agent is semantics-aware.
- Score: 17.125080112897102
- Abstract: This paper addresses a challenging interactive task learning scenario we call rearrangement under unawareness: to manipulate a rigid-body environment in a context where the agent is unaware of a concept that is key to solving the instructed task. We propose SECURE, an interactive task learning framework designed to solve such problems. It uses embodied conversation to fix its deficient domain model -- through dialogue, the agent discovers and then learns to exploit unforeseen possibilities. In particular, SECURE learns from the user's embodied corrective feedback when it makes a mistake, and it makes strategic dialogue decisions to reveal useful evidence about novel concepts for solving the instructed task. Together, these abilities allow the agent to generalise to subsequent tasks using newly acquired knowledge. We demonstrate that learning to solve rearrangement under unawareness is more data efficient when the agent is semantics-aware -- that is, during both learning and inference it augments the evidence from the user's embodied conversation with its logical consequences, stemming from semantic analysis.
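To make the "semantics-aware" idea concrete, here is a minimal sketch of augmenting observed evidence with its logical consequences via a deductive closure. The predicate names and entailment rules below are hypothetical toy examples, not taken from the paper:

```python
# Minimal illustrative sketch (not the authors' implementation) of the
# "semantics-aware" idea from the abstract: evidence gathered from the
# user's embodied conversation is augmented with its logical consequences.
# The rules and predicate names are hypothetical examples.

from typing import Set, Tuple

Fact = Tuple[str, str]  # (predicate, object-id), e.g. ("crimson", "obj3")

# Hypothetical lexical-semantic rules: if the antecedent predicate holds
# of an object, the consequent predicates hold too (crimson entails red).
RULES = {
    "crimson": ["red", "colored"],
    "scarlet": ["red", "colored"],
    "red": ["colored"],
}

def semantic_closure(evidence: Set[Fact]) -> Set[Fact]:
    """Close a set of observed facts under the implication rules."""
    closed = set(evidence)
    frontier = list(evidence)
    while frontier:
        pred, obj = frontier.pop()
        for entailed in RULES.get(pred, []):
            fact = (entailed, obj)
            if fact not in closed:
                closed.add(fact)
                frontier.append(fact)
    return closed

# Corrective feedback "that block is crimson" yields one observed fact,
# but a semantics-aware learner also trains on its entailments.
observed = {("crimson", "block7")}
print(semantic_closure(observed))
# -> {('crimson', 'block7'), ('red', 'block7'), ('colored', 'block7')}
```

Under this reading, a single corrective utterance supplies several training signals at once, which is one plausible source of the data efficiency the abstract reports.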
Related papers
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings, evaluating both proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints Internalization [56.674356045200696]
We propose a novel method to train AI agents to incorporate knowledge and skills for multiple tasks without the need for cumbersome note systems or prior high-quality demonstration data.
Our approach employs an iterative process where the agent collects new experiences, receives corrective feedback from humans in the form of hints, and integrates this feedback into its weights.
We demonstrate the efficacy of our approach by implementing it in a Llama-3-based agent which, after only a few rounds of feedback, outperforms advanced models such as GPT-4o and DeepSeek-V3 on a set of tasks.
arXiv Detail & Related papers (2025-02-03T17:45:46Z)
- Explaining Agent's Decision-making in a Hierarchical Reinforcement Learning Scenario [0.6643086804649938]
Reinforcement learning is a machine learning approach based on behavioral psychology.
In this work, we make use of the memory-based explainable reinforcement learning method in a hierarchical environment composed of sub-tasks.
arXiv Detail & Related papers (2022-12-14T01:18:45Z)
- Semantic Interactive Learning for Text Classification: A Constructive Approach for Contextual Interactions [0.0]
We propose a novel interaction framework called Semantic Interactive Learning for the text domain.
We frame the problem of incorporating constructive and contextual feedback into the learner as a task to find an architecture that enables more semantic alignment between humans and machines.
We introduce a technique called SemanticPush that is effective for translating conceptual corrections of humans to non-extrapolating training examples.
arXiv Detail & Related papers (2022-09-07T08:13:45Z)
- Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z)
- Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- Continual Prompt Tuning for Dialog State Tracking [58.66412648276873]
A desirable dialog system should be able to continually learn new skills without forgetting old ones.
We present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks.
arXiv Detail & Related papers (2022-03-13T13:22:41Z)
- Learning When and What to Ask: a Hierarchical Reinforcement Learning Framework [17.017688226277834]
We formulate a hierarchical reinforcement learning framework for learning to decide when to request additional information from humans.
Results on a simulated human-assisted navigation problem demonstrate the effectiveness of our framework.
arXiv Detail & Related papers (2021-10-14T01:30:36Z)
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Dialog Policy Learning for Joint Clarification and Active Learning Queries [24.420113907842147]
We train a hierarchical dialog policy to jointly perform both clarification and active learning.
We show that jointly learning dialog policies for clarification and active learning is more effective than the use of static dialog policies for one or both of these functions.
arXiv Detail & Related papers (2020-06-09T18:53:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.