Collaborative Reasoning on Multi-Modal Semantic Graphs for
Video-Grounded Dialogue Generation
- URL: http://arxiv.org/abs/2210.12460v1
- Date: Sat, 22 Oct 2022 14:45:29 GMT
- Title: Collaborative Reasoning on Multi-Modal Semantic Graphs for
Video-Grounded Dialogue Generation
- Authors: Xueliang Zhao, Yuxuan Wang, Chongyang Tao, Chenshuo Wang and Dongyan
Zhao
- Abstract summary: We study video-grounded dialogue generation, where a response is generated based on the dialogue context and the associated video.
The primary challenges of this task lie in (1) the difficulty of integrating video data into pre-trained language models (PLMs) and (2) the need to account for the complementarity of different modalities throughout the reasoning process.
We propose a multi-agent reinforcement learning method to collaboratively perform reasoning on different modalities.
- Score: 53.87485260058957
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study video-grounded dialogue generation, where a response is generated
based on the dialogue context and the associated video. The primary challenges
of this task lie in (1) the difficulty of integrating video data into
pre-trained language models (PLMs) which presents obstacles to exploiting the
power of large-scale pre-training; and (2) the necessity of taking into account
the complementarity of various modalities throughout the reasoning process.
Although existing methods have made remarkable progress in video-grounded dialogue
generation, they still fall short of integrating with PLMs in a way that allows
information from different modalities to complement each other.
To alleviate these issues, we first propose extracting pertinent information
from videos and turning it into reasoning paths that are acceptable to PLMs.
Additionally, we propose a multi-agent reinforcement learning method to
collaboratively perform reasoning on different modalities (i.e., video and
dialogue context). Experimental results on two public datasets indicate that the
proposed model outperforms state-of-the-art models by large margins on both
automatic and human evaluations.
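As a rough illustration of the first idea (extracting pertinent information from a video and turning it into reasoning paths a text-only PLM can consume), the sketch below linearizes a toy semantic graph into a path string appended to the dialogue context. This is a minimal sketch under assumptions, not the authors' implementation; the function names, the greedy path walk, and the `[PATH]` separator are all illustrative.

```python
# Minimal sketch (illustrative, not the paper's code): turn video-derived
# semantic triples into a linearized "reasoning path" string that a text-only
# PLM could consume alongside the dialogue context.

from collections import defaultdict


def build_graph(triples):
    """Adjacency list over (subject, relation, object) triples."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph


def extract_path(graph, start, max_hops=3):
    """Greedy walk from a seed entity; a stand-in for learned path selection."""
    path, node = [start], start
    for _ in range(max_hops):
        if not graph[node]:
            break
        rel, nxt = graph[node][0]  # a real system would score and choose edges
        path += [rel, nxt]
        node = nxt
    return path


def linearize(path, dialogue_context):
    """Concatenate the path with the dialogue so a PLM sees plain text."""
    return dialogue_context + " [PATH] " + " ".join(path)


if __name__ == "__main__":
    triples = [("man", "holds", "guitar"),
               ("guitar", "located_in", "living room"),
               ("living room", "contains", "sofa")]
    graph = build_graph(triples)
    prompt = linearize(extract_path(graph, "man"),
                       "Q: What is the man doing? A:")
    print(prompt)
```

The resulting string can be fed to any sequence-to-sequence PLM as ordinary text, which is what makes the path representation "acceptable to PLMs"; how paths are scored and selected (e.g., via the proposed multi-agent reinforcement learning) is left out of this sketch.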
Related papers
- Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering [71.62961521518731]
HeurVidQA is a framework that leverages domain-specific entity-actions to refine pre-trained video-language foundation models.
Our approach treats these models as implicit knowledge engines, employing domain-specific entity-action prompters to direct the model's focus toward precise cues that enhance reasoning.
arXiv Detail & Related papers (2024-10-12T06:22:23Z)
- VIMI: Grounding Video Generation through Multi-modal Instruction [89.90065445082442]
Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining.
We construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts.
We finetune the model from the first stage on three video generation tasks, incorporating multi-modal instructions.
arXiv Detail & Related papers (2024-07-08T18:12:49Z)
- Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach [33.231639257323536]
In this paper, we address the issue of dialogue-form context query within the interactive text-to-image retrieval task.
By reformulating the dialogue-form context, we eliminate the necessity of fine-tuning a retrieval model on existing visual dialogue data.
We construct the LLM questioner to generate non-redundant questions about the attributes of the target image.
arXiv Detail & Related papers (2024-06-05T16:09:01Z)
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- $C^3$: Compositional Counterfactual Contrastive Learning for Video-grounded Dialogues [97.25466640240619]
Video-grounded dialogue systems aim to integrate video understanding and dialogue understanding to generate responses relevant to both the dialogue and video context.
Most existing approaches employ deep learning models and have achieved remarkable performance, given the relatively small datasets available.
We propose a novel approach of Compositional Counterfactual Contrastive Learning to develop contrastive training between factual and counterfactual samples in video-grounded dialogues.
arXiv Detail & Related papers (2021-06-16T16:05:27Z)
- Video-Grounded Dialogues with Pretrained Generation Language Models [88.15419265622748]
We leverage the power of pre-trained language models for improving video-grounded dialogue.
We propose a framework that formulates video-grounded dialogue tasks as a sequence-to-sequence task.
Our framework allows fine-tuning language models to capture dependencies across multiple modalities.
arXiv Detail & Related papers (2020-06-27T08:24:26Z)