Diffusion Models as Masked Audio-Video Learners
- URL: http://arxiv.org/abs/2310.03937v2
- Date: Thu, 4 Jan 2024 21:39:25 GMT
- Title: Diffusion Models as Masked Audio-Video Learners
- Authors: Elvis Nunez, Yanzi Jin, Mohammad Rastegari, Sachin Mehta, Maxwell Horton
- Abstract summary: Masked Audio-Video Learners (MAViL) has emerged as a state-of-the-art audio-video pre-training framework.
In this paper, we study the potential synergy between diffusion models and MAViL.
- Score: 27.22726553443404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past several years, the synchronization between audio and visual
signals has been leveraged to learn richer audio-visual representations. Aided
by the wide availability of unlabeled videos, many unsupervised training
frameworks have demonstrated impressive results on various downstream audio and
video tasks. Recently, Masked Audio-Video Learners (MAViL) has emerged as a
state-of-the-art audio-video pre-training framework. MAViL couples contrastive
learning with masked autoencoding to jointly reconstruct audio spectrograms and
video frames by fusing information from both modalities. In this paper, we
study the potential synergy between diffusion models and MAViL, seeking to
derive mutual benefits from these two frameworks. Incorporating diffusion into
MAViL, combined with training-efficiency techniques that include a masking
ratio curriculum and adaptive batch sizing, results in a notable 32% reduction
in pre-training Floating-Point Operations (FLOPs) and an 18% decrease in
pre-training wall-clock time. Crucially, this improved efficiency does not
compromise the model's performance on downstream audio classification tasks
relative to MAViL.
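To make the components named in the abstract concrete, the following is a minimal PyTorch sketch of how masked audio-video patches, an audio-video contrastive term, a diffusion-style denoising reconstruction of the masked patches, a masking-ratio curriculum, and adaptive batch sizing could fit together in one training loop. Everything here is an illustrative assumption rather than the authors' implementation: the module shapes, the cosine noise schedule, the direction of the curriculum, the shared decoder, and the visible-token-budget rule for the batch size are all placeholders.

```python
# Hypothetical sketch (not the authors' code) of masked audio-video pre-training
# with a contrastive term, a diffusion-style denoising objective on masked patches,
# a masking-ratio curriculum, and adaptive batch sizing.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH_DIM, NUM_PATCHES, WIDTH = 64, 64, 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(PATCH_DIM, WIDTH), nn.GELU(), nn.Linear(WIDTH, WIDTH))
    def forward(self, x):                      # x: (B, N_visible, PATCH_DIM)
        return self.net(x)

class DenoisingDecoder(nn.Module):
    """Predicts the noise added to masked patches, conditioned on visible context and t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(WIDTH + PATCH_DIM + 1, WIDTH), nn.GELU(),
                                 nn.Linear(WIDTH, PATCH_DIM))
    def forward(self, ctx, noisy, t):
        ctx = ctx.mean(dim=1, keepdim=True).expand(-1, noisy.shape[1], -1)
        t = t.view(-1, 1, 1).expand(-1, noisy.shape[1], 1)
        return self.net(torch.cat([ctx, noisy, t], dim=-1))

def masking_ratio(step, total, start=0.9, end=0.7):
    """Curriculum (assumed direction): start with heavy masking, relax over training."""
    return start + (end - start) * step / max(total - 1, 1)

def adaptive_batch_size(ratio, visible_token_budget=4096):
    """Keep visible tokens per step roughly constant, so heavier masking => larger batch."""
    visible = max(int(NUM_PATCHES * (1.0 - ratio)), 1)
    return max(visible_token_budget // visible, 1)

def split_masked(patches, ratio):
    """Randomly partition patches into (visible, masked) along the patch dimension."""
    b, n, d = patches.shape
    num_masked = int(n * ratio)
    perm = torch.rand(b, n, device=patches.device).argsort(dim=1)
    take = lambda idx: torch.gather(patches, 1, idx.unsqueeze(-1).expand(-1, -1, d))
    return take(perm[:, num_masked:]), take(perm[:, :num_masked])

audio_enc, video_enc, decoder = Encoder(), Encoder(), DenoisingDecoder()
params = list(audio_enc.parameters()) + list(video_enc.parameters()) + list(decoder.parameters())
opt = torch.optim.AdamW(params, lr=1e-4)
total_steps = 100

for step in range(total_steps):
    ratio = masking_ratio(step, total_steps)
    bsz = adaptive_batch_size(ratio)
    # Random tensors stand in for audio-spectrogram and video-frame patch embeddings.
    audio = torch.randn(bsz, NUM_PATCHES, PATCH_DIM)
    video = torch.randn(bsz, NUM_PATCHES, PATCH_DIM)

    a_vis, a_mask = split_masked(audio, ratio)
    v_vis, v_mask = split_masked(video, ratio)
    a_ctx, v_ctx = audio_enc(a_vis), video_enc(v_vis)

    # Contrastive term: align mean-pooled audio and video embeddings (InfoNCE).
    a_emb = F.normalize(a_ctx.mean(dim=1), dim=-1)
    v_emb = F.normalize(v_ctx.mean(dim=1), dim=-1)
    logits = a_emb @ v_emb.t() / 0.07
    targets = torch.arange(bsz)
    contrastive = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    # Diffusion-style reconstruction: corrupt masked patches, predict the injected noise.
    t = torch.rand(bsz)
    alpha = torch.cos(t * math.pi / 2).view(-1, 1, 1)   # toy cosine noise schedule (assumption)
    denoise = 0.0
    for ctx, masked in ((a_ctx, a_mask), (v_ctx, v_mask)):
        eps = torch.randn_like(masked)
        noisy = alpha * masked + (1 - alpha ** 2).sqrt() * eps
        denoise = denoise + F.mse_loss(decoder(ctx, noisy, t), eps)

    loss = contrastive + denoise
    opt.zero_grad(); loss.backward(); opt.step()
```

The adaptive batch size in this sketch simply holds the number of visible (unmasked) tokens per step roughly constant, which is one plausible way a higher masking ratio could translate into the FLOP and wall-clock savings reported in the abstract.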
Related papers
- Mutual Learning for Acoustic Matching and Dereverberation via Visual Scene-driven Diffusion [93.32354378820648]
We introduce MVSD, a mutual learning framework based on diffusion models.
MVSD considers the two tasks symmetrically, exploiting the reciprocal relationship to facilitate learning from inverse tasks.
Our framework can improve the performance of the reverberator and dereverberator.
arXiv Detail & Related papers (2024-07-15T00:47:56Z)
- CMMD: Contrastive Multi-Modal Diffusion for Video-Audio Conditional Modeling [21.380988939240844]
We introduce a multi-modal diffusion model tailored for the bi-directional conditional generation of video and audio.
We propose a joint contrastive training loss to improve the synchronization between visual and auditory occurrences.
arXiv Detail & Related papers (2023-12-08T23:55:19Z)
- Fine-grained Audio-Visual Joint Representations for Multimodal Large Language Models [25.660343393359565]
This paper proposes a fine-grained audio-visual joint representation (FAVOR) learning framework for multimodal large language models (LLM)
FAVOR simultaneously perceives speech and audio events in the audio input stream and images or videos in the visual input stream at the frame level.
An interactive demo of FAVOR is available at https://github.com/BriansIDP/AudioVisualLLM.git, and the training code and model checkpoints will be released soon.
arXiv Detail & Related papers (2023-10-09T17:00:20Z)
- Improving Audio-Visual Speech Recognition by Lip-Subword Correlation Based Visual Pre-training and Cross-Modal Fusion Encoder [58.523884148942166]
We propose two novel techniques to improve audio-visual speech recognition (AVSR) under a pre-training and fine-tuning training framework.
First, we explore the correlation between lip shapes and syllable-level subword units in Mandarin to establish good frame-level syllable boundaries from lip shapes.
Next, we propose an audio-guided cross-modal fusion encoder (CMFE) neural network to utilize main training parameters for multiple cross-modal attention layers.
arXiv Detail & Related papers (2023-08-14T08:19:24Z)
- MAViL: Masked Audio-Video Learners [68.61844803682145]
We present Masked Audio-Video Learners (MAViL) to train audio-visual representations.
Pre-training with MAViL enables the model to perform well in audio-visual classification and retrieval tasks.
For the first time, a self-supervised audio-visual model outperforms models that use external supervision on benchmarks.
arXiv Detail & Related papers (2022-12-15T18:59:59Z)
- Contrastive Audio-Visual Masked Autoencoder [85.53776628515561]
We propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE), which combines contrastive learning with masked data modeling to learn joint audio-visual representations.
Our fully self-supervised pretrained CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound.
arXiv Detail & Related papers (2022-10-02T07:29:57Z)
- A study on joint modeling and data augmentation of multi-modalities for audio-visual scene classification [64.59834310846516]
We propose two techniques, namely joint modeling and data augmentation, to improve system performance for audio-visual scene classification (AVSC).
Our final system can achieve the best accuracy of 94.2% among all single AVSC systems submitted to DCASE 2021 Task 1b.
arXiv Detail & Related papers (2022-03-07T07:29:55Z)
- The Impact of Spatiotemporal Augmentations on Self-Supervised Audiovisual Representation Learning [2.28438857884398]
We present a contrastive framework to learn audiovisual representations from unlabeled videos.
We find that lossy temporal transformations that do not corrupt the temporal coherency of videos are the most effective.
Compared to self-supervised models pre-trained with only sampling-based temporal augmentation, models pre-trained with our temporal augmentations achieve approximately a 6.5% gain in linear evaluation performance on the AVE dataset.
arXiv Detail & Related papers (2021-10-13T23:48:58Z)
- AudioVisual Video Summarization [103.47766795086206]
In video summarization, existing approaches exploit only the visual information while neglecting the audio information.
We propose to jointly exploit the audio and visual information for the video summarization task, and develop an AudioVisual Recurrent Network (AVRN) to achieve this.
arXiv Detail & Related papers (2021-05-17T08:36:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.