R2H: Building Multimodal Navigation Helpers that Respond to Help Requests
- URL: http://arxiv.org/abs/2305.14260v2
- Date: Tue, 17 Oct 2023 17:46:41 GMT
- Title: R2H: Building Multimodal Navigation Helpers that Respond to Help Requests
- Authors: Yue Fan, Jing Gu, Kaizhi Zheng, Xin Eric Wang
- Abstract summary: We first introduce a novel benchmark, Respond to Help Requests (R2H), to promote the development of multi-modal navigation helpers.
R2H mainly includes two tasks: (1) Respond to Dialog History (RDH), which assesses the helper agent's ability to generate informative responses based on a given dialog history, and (2) Respond during Interaction (RdI), which evaluates the effectiveness and efficiency of the response during consistent cooperation with a task performer.
- Score: 30.695642371684663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent navigation-helper agents are critical as they can navigate users
in unknown areas through environmental awareness and conversational ability,
serving as potential accessibility tools for individuals with disabilities. In
this work, we first introduce a novel benchmark, Respond to Help Requests
(R2H), to promote the development of multi-modal navigation helpers capable of
responding to requests for help, utilizing existing dialog-based embodied
datasets. R2H mainly includes two tasks: (1) Respond to Dialog History (RDH),
which assesses the helper agent's ability to generate informative responses
based on a given dialog history, and (2) Respond during Interaction (RdI),
which evaluates the effectiveness and efficiency of the response during
consistent cooperation with a task performer. Furthermore, we explore two
approaches to construct the navigation-helper agent, including fine-tuning a
novel task-oriented multi-modal response generation model that can see and
respond, named SeeRee, and employing a multi-modal large language model in a
zero-shot manner. Analysis of the task and method was conducted based on both
automatic benchmarking and human evaluations. Project website:
https://sites.google.com/view/response2helprequests/home.
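To make the two task formats concrete, below is a minimal, hypothetical sketch of how a helper agent could be exercised under RDH (one response per recorded help request) and RdI (an interactive loop with a task performer). The class, method, and environment names are illustrative assumptions and not the benchmark's actual API.

```python
# Minimal, hypothetical sketch of the two R2H task formats (RDH and RdI).
# Names such as HelpRequest, NavigationHelper, performer, and env are
# illustrative assumptions, not the benchmark's actual API.
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class HelpRequest:
    dialog_history: List[str]      # prior task-performer / helper turns
    helper_observation: List[str]  # e.g., paths to panoramic views visible to the helper


class NavigationHelper(Protocol):
    def respond(self, request: HelpRequest) -> str:
        """Generate a natural-language response to a help request."""
        ...


def respond_to_dialog_history(helper: NavigationHelper,
                              episodes: List[HelpRequest]) -> List[str]:
    """RDH: generate one response per recorded help request (no live interaction)."""
    return [helper.respond(ep) for ep in episodes]


def respond_during_interaction(helper: NavigationHelper, performer, env,
                               max_steps: int = 30) -> bool:
    """RdI: cooperate with a task performer until it stops or the step budget is spent."""
    history: List[str] = []
    for _ in range(max_steps):
        action = performer.act(env.performer_observation(), history)
        if action == "stop":
            break
        if action == "ask_for_help":
            reply = helper.respond(HelpRequest(history, env.helper_observation()))
            history.append(reply)
        else:
            env.step(action)
    return env.task_success()
```

Under this framing, the fine-tuned SeeRee model and a zero-shot multimodal large language model would simply be two different implementations of the respond method.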
Related papers
- Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent [102.31558123570437]
Multimodal Retrieval Augmented Generation (mRAG) plays an important role in mitigating the "hallucination" issue inherent in multimodal large language models (MLLMs).
We propose the first self-adaptive planning agent for multimodal retrieval, OmniSearch.
arXiv Detail & Related papers (2024-11-05T09:27:21Z)
- ReSpAct: Harmonizing Reasoning, Speaking, and Acting Towards Building Large Language Model-Based Conversational AI Agents [11.118991548784459]
Large language model (LLM)-based agents have been increasingly used to interact with external environments.
However, current frameworks do not enable these agents to interact with users to align on the details of their tasks.
This work introduces ReSpAct, a novel framework that combines the essential skills for building task-oriented "conversational" agents.
arXiv Detail & Related papers (2024-11-01T15:57:45Z)
- RAG based Question-Answering for Contextual Response Prediction System [0.4660328753262075]
Large Language Models (LLMs) have shown versatility in various Natural Language Processing (NLP) tasks.
Retrieval Augmented Generation (RAG) emerges as a promising technique to address this challenge.
This paper introduces an end-to-end framework that employs LLMs with RAG capabilities for industry use cases.
arXiv Detail & Related papers (2024-09-05T17:14:23Z)
- Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
How to build and use dialogue data efficiently, and how to deploy models across different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z)
- INSCIT: Information-Seeking Conversations with Mixed-Initiative Interactions [47.90088587508672]
InSCIt is a dataset for information-seeking conversations with mixed-initiative interactions.
It contains 4.7K user-agent turns from 805 human-human conversations.
We report results of two systems based on state-of-the-art models of conversational knowledge identification and open-domain question answering.
arXiv Detail & Related papers (2022-07-02T06:18:12Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- RMM: A Recursive Mental Model for Dialog Navigation [102.42641990401735]
Language-guided robots must be able to both ask humans questions and understand answers.
Inspired by theory of mind, we propose the Recursive Mental Model (RMM).
We demonstrate that RMM enables better generalization to novel environments.
arXiv Detail & Related papers (2020-05-02T06:57:14Z)