Audio-Visual LLM for Video Understanding
- URL: http://arxiv.org/abs/2312.06720v2
- Date: Wed, 13 Dec 2023 04:45:01 GMT
- Title: Audio-Visual LLM for Video Understanding
- Authors: Fangxun Shu, Lei Zhang, Hao Jiang, Cihang Xie
- Abstract summary: This paper presents Audio-Visual LLM, a Multimodal Large Language Model that takes both visual and auditory inputs for holistic video understanding.
We introduce a high-quality video instruction dataset, derived from GPT-4.
Experiments demonstrate that Audio-Visual LLM impressively achieves strong zero-shot results across a range of video understanding tasks.
- Score: 25.963166809113005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents Audio-Visual LLM, a Multimodal Large Language Model that
takes both visual and auditory inputs for holistic video understanding. A key
design is the modality-augmented training, which involves the integration of
modality-specific tokens engineered to activate the appropriate visual and/or
auditory encoder selectively. This mechanism is pivotal in enabling end-to-end
joint training with video data in different modalities, including visual-only,
audio-only, and audio-visual formats. Moreover, we introduce a high-quality
video instruction dataset, derived from GPT-4. This dataset allows Audio-Visual
LLM to adeptly process a variety of task-oriented video instructions, ranging
from multi-turn conversations and audio-visual narratives to complex reasoning
tasks. Extensive experiments demonstrate that Audio-Visual LLM impressively
achieves strong zero-shot results across a range of video understanding tasks.
For example, Audio-Visual LLM achieves an accuracy of 53.7% on MSRVTT-QA,
outperforming non-LLM-based InternVideo by 6.6% and LLM-based Valley by 4.4%,
respectively. Additionally, our Audio-Visual LLM also achieves competitive
performance on audio tasks (e.g., AudioCaps).
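As a rough illustration of the modality-augmented training described in the abstract, the sketch below shows how learnable modality-specific tokens could gate which encoder outputs enter the LLM input sequence, so that visual-only, audio-only, and audio-visual clips can be trained jointly end-to-end. The module names, feature dimensions, and token scheme are assumptions for illustration, not the paper's released implementation.

```python
# Minimal, illustrative sketch of modality-augmented training (assumed design).
import torch
import torch.nn as nn

class AudioVisualLLMSketch(nn.Module):
    def __init__(self, d_model=4096):
        super().__init__()
        self.visual_encoder = nn.Linear(1024, d_model)  # stand-in for a visual encoder
        self.audio_encoder = nn.Linear(768, d_model)    # stand-in for an audio encoder
        # Learnable modality-specific tokens that mark which encoders are active.
        self.tokens = nn.ParameterDict({
            "visual": nn.Parameter(torch.randn(1, d_model)),
            "audio": nn.Parameter(torch.randn(1, d_model)),
        })

    def forward(self, visual_feats=None, audio_feats=None):
        """Build the LLM input sequence for visual-only, audio-only,
        or audio-visual samples; absent modalities are simply skipped."""
        parts = []
        if visual_feats is not None:
            parts += [self.tokens["visual"], self.visual_encoder(visual_feats)]
        if audio_feats is not None:
            parts += [self.tokens["audio"], self.audio_encoder(audio_feats)]
        # The concatenated embeddings would be prepended to the text prompt
        # embeddings and fed to the LLM for end-to-end joint training.
        return torch.cat(parts, dim=0)

model = AudioVisualLLMSketch()
av = model(torch.randn(8, 1024), torch.randn(16, 768))  # audio-visual clip
v_only = model(visual_feats=torch.randn(8, 1024))       # visual-only clip
```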
Related papers
- SAVEn-Vid: Synergistic Audio-Visual Integration for Enhanced Understanding in Long Video Context [19.224601064352846]
We introduce SAVEn-Vid, the first-ever long audio-visual video dataset comprising over 58k audio-visual instructions.
We present AVBench, a benchmark containing 2,500 QAs designed to evaluate models on enhanced audio-visual comprehension tasks within long video.
Experiments demonstrate that SAVEnVideo outperforms the best Video-LLM by 3.61% on the zero-shot long video task (Video-MME) and surpasses the leading audio-visual LLM by 1.29% on the zero-shot audio-visual task (Music-AVQA).
arXiv Detail & Related papers (2024-11-25T09:22:13Z)
- Large Language Models Are Strong Audio-Visual Speech Recognition Learners [53.142635674428874]
Multimodal large language models (MLLMs) have recently become a focal point of research due to their formidable multimodal understanding capabilities.
We propose Llama-AVSR, a new MLLM with strong audio-visual speech recognition capabilities.
We evaluate our proposed approach on LRS3, the largest public AVSR benchmark, and we achieve new state-of-the-art results for the tasks of ASR and AVSR with a WER of 0.81% and 0.77%, respectively.
arXiv Detail & Related papers (2024-09-18T21:17:27Z)
- Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time [73.7845280328535]
We present Meerkat, an audio-visual LLM equipped with a fine-grained understanding of image and audio.
Meerkat can tackle challenging tasks such as audio-referred image grounding, image-guided audio temporal localization, and audio-visual fact-checking.
We achieve state-of-the-art performance on all these downstream tasks with a relative improvement of up to 37.12%.
arXiv Detail & Related papers (2024-07-01T23:32:25Z)
- Empowering LLMs with Pseudo-Untrimmed Videos for Audio-Visual Temporal Understanding [33.85362137961572]
We introduce PU-VALOR, a comprehensive audio-visual dataset comprising over 114,000 pseudo-untrimmed videos with detailed temporal annotations.
PU-VALOR is derived from the large-scale but coarse-annotated audio-visual dataset VALOR, through a subtle method involving event-based video clustering.
We develop AVicuna, a model capable of aligning audio-visual events with temporal intervals and corresponding text tokens.
arXiv Detail & Related papers (2024-03-24T19:50:49Z)
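A loose sketch of how pseudo-untrimmed videos with temporal annotations might be assembled from trimmed, event-labeled clips, in the spirit of the event-based clustering described in the PU-VALOR entry above; the data layout and field names are illustrative assumptions, not the PU-VALOR pipeline.

```python
# Hypothetical sketch: build a pseudo-untrimmed video by concatenating trimmed
# clips and recording each clip's event label with its start/end time.
from dataclasses import dataclass

@dataclass
class Clip:
    event: str        # event label from the source (trimmed) dataset
    duration: float   # seconds

def build_pseudo_untrimmed(clips):
    """Concatenate clips and emit (event, start, end) temporal annotations."""
    annotations, t = [], 0.0
    for clip in clips:
        annotations.append({"event": clip.event,
                            "start": round(t, 2),
                            "end": round(t + clip.duration, 2)})
        t += clip.duration
    return annotations

# Clips drawn from different event clusters form one "untrimmed" video.
clips = [Clip("dog barking", 4.0), Clip("crowd cheering", 6.5), Clip("piano music", 5.0)]
print(build_pseudo_untrimmed(clips))
```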
- Fine-grained Audio-Visual Joint Representations for Multimodal Large Language Models [25.660343393359565]
This paper proposes a fine-grained audio-visual joint representation (FAVOR) learning framework for multimodal large language models (LLMs).
FAVOR simultaneously perceives speech and audio events in the audio input stream and images or videos in the visual input stream, at the frame level.
An interactive demo of FAVOR is available at https://github.com/BriansIDP/AudioVisualLLM.git, and the training code and model checkpoints will be released soon.
arXiv Detail & Related papers (2023-10-09T17:00:20Z)
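A rough sketch of frame-level audio-visual interleaving in the spirit of the FAVOR description above: audio features falling within each video frame's time window are pooled and paired with that frame's visual features. The windowing scheme and tensor shapes are illustrative assumptions, not the released FAVOR code.

```python
# Hypothetical frame-level pairing of audio and visual features (illustrative).
import torch

def frame_level_interleave(visual, audio, fps=2.0, audio_hz=50.0):
    """visual: (num_frames, d), audio: (num_audio_steps, d).
    Pool the audio steps that fall inside each frame's time window and
    interleave the pooled audio with the frame's visual feature."""
    steps_per_frame = int(audio_hz / fps)
    tokens = []
    for i, v in enumerate(visual):
        a_slice = audio[i * steps_per_frame:(i + 1) * steps_per_frame]
        a_pooled = a_slice.mean(dim=0) if len(a_slice) else torch.zeros_like(v)
        tokens += [v, a_pooled]          # one visual + one audio token per frame
    return torch.stack(tokens)           # (2 * num_frames, d)

seq = frame_level_interleave(torch.randn(8, 1024), torch.randn(200, 1024))
print(seq.shape)  # torch.Size([16, 1024])
```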
- Auto-ACD: A Large-scale Dataset for Audio-Language Representation Learning [50.28566759231076]
We propose an innovative, automatic approach to establish an audio dataset with high-quality captions.
Specifically, we construct a large-scale, high-quality, audio-language dataset, named as Auto-ACD, comprising over 1.5M audio-text pairs.
We employ an LLM to paraphrase a congruent caption for each audio clip, guided by the extracted multi-modality clues.
arXiv Detail & Related papers (2023-09-20T17:59:32Z)
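A hedged sketch of the clue-guided captioning step described in the Auto-ACD entry above: multi-modality clues gathered for an audio clip are formatted into a prompt, and an LLM is asked to paraphrase them into a single caption. The prompt wording and the `query_llm` helper are hypothetical placeholders, not the Auto-ACD pipeline.

```python
# Hypothetical sketch of clue-guided caption generation for an audio clip.
def build_caption_prompt(clues: dict) -> str:
    """Format extracted multi-modality clues into an LLM prompt."""
    lines = [f"- {name}: {value}" for name, value in clues.items()]
    return (
        "Write one natural-sounding caption describing the audio, "
        "consistent with all of the clues below.\n" + "\n".join(lines)
    )

clues = {
    "audio tags": "dog barking, wind",
    "visual scene": "backyard, daytime",
    "detected objects": "dog, fence",
}
prompt = build_caption_prompt(clues)
# caption = query_llm(prompt)   # query_llm stands in for any LLM API call
print(prompt)
```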
- Text-to-feature diffusion for audio-visual few-shot learning [59.45164042078649]
Few-shot learning from video data is a challenging and underexplored, yet much cheaper, setup.
We introduce a unified audio-visual few-shot video classification benchmark on three datasets.
We show that AV-DIFF obtains state-of-the-art performance on our proposed benchmark for audio-visual few-shot learning.
arXiv Detail & Related papers (2023-09-07T17:30:36Z)
- Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding [61.80870130860662]
Video-LLaMA is a framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video.
Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual and audio encoders and the frozen LLMs.
We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses.
arXiv Detail & Related papers (2023-06-05T13:17:27Z)
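A minimal sketch of the frozen-backbone pattern the Video-LLaMA summary above describes: pre-trained visual and audio encoders and the LLM stay frozen, while small projection layers are trained to map encoder outputs into the LLM embedding space. The module choices and shapes are assumptions, not Video-LLaMA's released implementation.

```python
# Illustrative sketch: train only small projections while the encoders and LLM
# stay frozen (module and parameter choices are assumptions).
import torch
import torch.nn as nn

visual_encoder = nn.Linear(1024, 1024)  # stand-in for a frozen visual encoder
audio_encoder = nn.Linear(768, 768)     # stand-in for a frozen audio encoder
llm = nn.Linear(4096, 4096)             # stand-in for a frozen LLM

for module in (visual_encoder, audio_encoder, llm):
    for p in module.parameters():
        p.requires_grad = False         # backbones are kept frozen

# Only these cross-modal projections receive gradients during training.
visual_proj = nn.Linear(1024, 4096)
audio_proj = nn.Linear(768, 4096)

trainable = [p for m in (visual_proj, audio_proj) for p in m.parameters()]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

v_tokens = visual_proj(visual_encoder(torch.randn(8, 1024)))
a_tokens = audio_proj(audio_encoder(torch.randn(4, 768)))
inputs = torch.cat([v_tokens, a_tokens], dim=0)  # prepended to text embeddings
```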
- AudioVisual Video Summarization [103.47766795086206]
In video summarization, existing approaches exploit only the visual information while neglecting the audio information.
We propose to jointly exploit the audio and visual information for the video summarization task, and develop an AudioVisual Recurrent Network (AVRN) to achieve this.
arXiv Detail & Related papers (2021-05-17T08:36:10Z)
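A rough sketch of how a recurrent network could fuse per-segment audio and visual features for summarization, in the spirit of the AVRN summary above; the fusion-by-concatenation design and the importance-score head are illustrative assumptions, not the paper's AVRN architecture.

```python
# Hypothetical audio-visual recurrent scorer for video summarization.
import torch
import torch.nn as nn

class AudioVisualRecurrentScorer(nn.Module):
    def __init__(self, d_visual=1024, d_audio=128, d_hidden=256):
        super().__init__()
        self.rnn = nn.GRU(d_visual + d_audio, d_hidden, batch_first=True)
        self.score = nn.Linear(d_hidden, 1)   # per-segment importance score

    def forward(self, visual, audio):
        # visual: (batch, segments, d_visual); audio: (batch, segments, d_audio)
        fused = torch.cat([visual, audio], dim=-1)
        hidden, _ = self.rnn(fused)
        return torch.sigmoid(self.score(hidden)).squeeze(-1)  # (batch, segments)

scorer = AudioVisualRecurrentScorer()
scores = scorer(torch.randn(2, 30, 1024), torch.randn(2, 30, 128))
print(scores.shape)  # torch.Size([2, 30])
```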