ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue
- URL: http://arxiv.org/abs/2305.13602v2
- Date: Fri, 20 Oct 2023 04:59:10 GMT
- Title: ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue
- Authors: Haoqin Tu, Yitong Li, Fei Mi, Zhongliang Yang
- Abstract summary: We provide a new paradigm of constructing multimodal dialogues by splitting visual knowledge into finer granularity.
To boost the accuracy and diversity of the augmented visual information, we retrieve it from the Internet or a large image dataset.
By leveraging text and vision knowledge, ReSee can produce informative responses with real-world visual concepts.
- Score: 34.223466503256766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incorporating visual knowledge into text-only dialogue systems has become a
potential direction to imitate the way humans think, imagine, and communicate.
However, existing multimodal dialogue systems are either confined by the scale
and quality of available datasets or the coarse concept of visual knowledge. To
address these issues, we provide a new paradigm for constructing multimodal
dialogues, together with two datasets extended from text-only dialogues under
this paradigm (ReSee-WoW, ReSee-DD). We propose to explicitly split visual
knowledge into finer granularity (``turn-level'' and ``entity-level''). To
further boost the accuracy and diversity of the augmented visual information,
we retrieve it from the Internet or a large image dataset. To demonstrate the
superiority and universality of the provided visual knowledge, we propose a
simple yet effective framework, ReSee, which adds visual representations to
vanilla dialogue models via modality concatenation. We also conduct extensive
experiments and ablations with respect to different model configurations and
visual knowledge settings. Encouraging empirical results not only demonstrate
the effectiveness of introducing visual knowledge at both the entity and turn
levels, but also verify that ReSee outperforms several state-of-the-art
methods in automatic and human evaluations. By leveraging text and vision
knowledge, ReSee can produce informative responses with real-world visual
concepts. Our code is available at https://github.com/ImKeTT/ReSee.
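
One way to picture the entity-level retrieval described in the abstract is text-to-image matching with an off-the-shelf encoder such as CLIP. The block below is only a minimal sketch under that assumption, not ReSee's actual retrieval pipeline: the local image folder, the entity string, and the checkpoint name are placeholders, and the paper additionally retrieves images from the Internet.

```python
# Minimal sketch: rank candidate images against an entity mention with CLIP.
# This approximates entity-level visual retrieval from a local image pool;
# the paths, entity, and checkpoint are illustrative assumptions only.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def retrieve_entity_images(entity: str, image_dir: str, top_k: int = 3):
    """Return the top_k images most similar to the entity mention."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(text=[f"a photo of {entity}"], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_text has shape (1, num_images): query-to-image similarities.
    scores = out.logits_per_text.squeeze(0)
    best = scores.topk(min(top_k, len(paths))).indices.tolist()
    return [paths[i] for i in best]

# Example: fetch images for an entity mentioned in a dialogue turn.
# top_images = retrieve_entity_images("golden retriever", "./image_pool")
```

Turn-level retrieval would work the same way, with the whole dialogue turn (or a condensed version of it) used as the text query instead of a single entity.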
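
The abstract's "modality concatenation" can be read as projecting image features into the text embedding space and prepending them to the token sequence of a standard dialogue encoder. The block below is a hedged PyTorch sketch of that idea; the module names, dimensions, and layer counts are illustrative and do not reproduce ReSee's architecture.

```python
# Hedged sketch of modality concatenation: projected visual features are
# prepended to token embeddings before a vanilla transformer dialogue encoder.
# All dimensions and modules here are illustrative placeholders.
import torch
import torch.nn as nn

class ConcatFusionDialogueEncoder(nn.Module):
    def __init__(self, vocab_size=30522, d_model=512, vis_dim=512,
                 n_layers=4, n_heads=8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Map turn-level / entity-level image features into the text space.
        self.vis_proj = nn.Linear(vis_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids, vis_feats):
        # token_ids: (batch, seq_len); vis_feats: (batch, n_images, vis_dim)
        text = self.token_emb(token_ids)
        vision = self.vis_proj(vis_feats)
        fused = torch.cat([vision, text], dim=1)  # concatenate along sequence axis
        return self.encoder(fused)

# Toy usage: two visual "tokens" (e.g. one turn-level + one entity-level feature).
enc = ConcatFusionDialogueEncoder()
hidden = enc(torch.randint(0, 30522, (1, 16)), torch.randn(1, 2, 512))
print(hidden.shape)  # torch.Size([1, 18, 512])
```

A response decoder would then attend over the fused hidden states as usual; nothing else in the dialogue model needs to change, which is the appeal of concatenation-style fusion.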
Related papers
- Visualizing Dialogues: Enhancing Image Selection through Dialogue Understanding with Large Language Models [25.070424546200293]
We present a novel approach leveraging the robust reasoning capabilities of large language models (LLMs) to generate precise dialogue-associated visual descriptors.
Experiments conducted on benchmark data validate the effectiveness of our proposed approach in deriving concise and accurate visual descriptors.
Our findings demonstrate the method's generalizability across diverse visual cues, various LLMs, and different datasets.
arXiv Detail & Related papers (2024-07-04T03:50:30Z)
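
As a rough illustration of the idea in the entry above (prompting an LLM for dialogue-associated visual descriptors), the snippet below only builds a prompt string that could be sent to any chat-style LLM; the template wording, speaker labels, and example dialogue are assumptions, not the paper's actual prompt.

```python
# Hypothetical prompt builder: turn a dialogue into a concise visual descriptor
# that an image-selection backend could consume. The template is an assumption.
def build_descriptor_prompt(dialogue_turns: list[str]) -> str:
    history = "\n".join(f"Speaker {i % 2 + 1}: {t}" for i, t in enumerate(dialogue_turns))
    return (
        "Read the dialogue below and write one short phrase describing the image "
        "that would best accompany the last turn. Mention only concrete, visible "
        "objects and scenes.\n\n"
        f"{history}\n\nVisual descriptor:"
    )

prompt = build_descriptor_prompt([
    "I finally visited Kyoto last week.",
    "Nice! What was the highlight?",
    "Walking through the bamboo grove at sunrise.",
])
# Send `prompt` to any chat LLM; its reply (e.g. a short phrase about a bamboo
# forest path) can then drive image retrieval or selection.
```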
- Contextual Object Detection with Multimodal Large Language Models [66.15566719178327]
We introduce a novel research problem of contextual object detection.
Three representative scenarios are investigated, including the language cloze test, visual captioning, and question answering.
We present ContextDET, a unified multimodal model that is capable of end-to-end differentiable modeling of visual-language contexts.
arXiv Detail & Related papers (2023-05-29T17:50:33Z)
- See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning [60.43585179885355]
We propose a novel framework named Interactive Prompting Visual Reasoner (IPVR) for few-shot knowledge-based visual reasoning.
IPVR contains three stages: see, think, and confirm.
We conduct experiments on a range of knowledge-based visual reasoning datasets.
arXiv Detail & Related papers (2023-01-12T18:59:50Z)
- Understanding ME? Multimodal Evaluation for Fine-grained Visual Commonsense [98.70218717851665]
Due to limited evaluation data resources, it is unclear whether models really understand the visual scene and the underlying commonsense knowledge.
We present a Multimodal Evaluation (ME) pipeline to automatically generate question-answer pairs to test models' understanding of the visual scene, text, and related knowledge.
We then take a step further to show that training with the ME data boosts the model's performance in standard VCR evaluation.
arXiv Detail & Related papers (2022-11-10T21:44:33Z)
- Visually-Augmented Language Modeling [137.36789885105642]
We propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling.
With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling.
We evaluate the proposed model on various multimodal commonsense reasoning tasks, which require visual information to excel.
arXiv Detail & Related papers (2022-05-20T13:41:12Z)
- K-LITE: Learning Transferable Visual Models with External Knowledge [242.3887854728843]
K-LITE (Knowledge-augmented Language-Image Training and Evaluation) is a strategy to leverage external knowledge to build transferable visual systems.
In training, it enriches entities in natural language with WordNet and Wiktionary knowledge.
In evaluation, the natural language is also augmented with external knowledge and then used to reference learned visual concepts.
arXiv Detail & Related papers (2022-04-20T04:47:01Z)
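
A minimal sketch of the kind of entity enrichment K-LITE describes, using WordNet glosses via NLTK (the Wiktionary side is omitted); how the gloss is appended to the text is an assumption here, not K-LITE's exact recipe.

```python
# Minimal sketch: append WordNet glosses to entity mentions in a caption,
# loosely in the spirit of K-LITE's knowledge augmentation. The output format
# and the choice of the first synset are assumptions.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def enrich_with_wordnet(text: str, entities: list[str]) -> str:
    glosses = []
    for ent in entities:
        synsets = wn.synsets(ent.replace(" ", "_"))
        if synsets:  # take the dictionary gloss of the first sense
            glosses.append(f"{ent}: {synsets[0].definition()}")
    if not glosses:
        return text
    return text + " [" + "; ".join(glosses) + "]"

# Prints the caption followed by short dictionary glosses for "corgi" and "beach".
print(enrich_with_wordnet("a corgi playing on the beach", ["corgi", "beach"]))
```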
- Reasoning with Multi-Structure Commonsense Knowledge in Visual Dialog [12.034554338597067]
We propose a novel model, Reasoning with Multi-structure Commonsense Knowledge (RMK).
In our model, the external knowledge is represented with sentence-level facts and graph-level facts.
On top of these multi-structure representations, our model can capture relevant knowledge and incorporate it into the vision and semantic features.
arXiv Detail & Related papers (2022-04-10T13:12:10Z)
- Modeling Coreference Relations in Visual Dialog [18.926582410644375]
The occurrence of coreference relations in the dialog makes it a more challenging task than visual question answering.
We propose two soft constraints that can improve the model's ability of resolving coreferences in dialog in an unsupervised way.
arXiv Detail & Related papers (2022-03-06T15:22:24Z)
- Modeling Explicit Concerning States for Reinforcement Learning in Visual Dialogue [43.42833961578857]
We propose Explicit Concerning States (ECS) to represent which visual contents are of concern at each round and which have been of concern throughout the Visual Dialogue.
ECS is modeled from multimodal information and is represented explicitly.
Based on ECS, we formulate two intuitive and interpretable rewards to encourage the Visual Dialogue agents to converse on diverse and informative visual information.
arXiv Detail & Related papers (2021-07-12T08:15:35Z)
- KBGN: Knowledge-Bridge Graph Network for Adaptive Vision-Text Reasoning in Visual Dialogue [17.119682693725718]
We propose a novel Knowledge-Bridge Graph Network (KBGN) model to bridge the cross-modal semantic relations between vision and text knowledge.
We show that our model outperforms existing models, achieving state-of-the-art results.
arXiv Detail & Related papers (2020-08-11T17:03:06Z)
- ORD: Object Relationship Discovery for Visual Dialogue Generation [60.471670447176656]
We propose an object relationship discovery (ORD) framework to preserve the object interactions for visual dialogue generation.
A hierarchical graph convolutional network (HierGCN) is proposed to retain the object nodes and neighbour relationships locally, and then refine the object-object connections globally.
Experiments have proved that the proposed method can significantly improve the quality of dialogue by utilising the contextual information of visual relationships.
arXiv Detail & Related papers (2020-06-15T12:25:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.