Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense
Interactions through Masked Modeling
- URL: http://arxiv.org/abs/2312.01017v1
- Date: Sat, 2 Dec 2023 03:38:49 GMT
- Title: Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense
Interactions through Masked Modeling
- Authors: Shentong Mo, Pedro Morgado
- Abstract summary: Humans possess a remarkable ability to integrate auditory and visual information, enabling a deeper understanding of the surrounding environment.
This early fusion of audio and visual cues, demonstrated through cognitive psychology and neuroscience research, offers promising potential for developing multimodal perception models.
We address the challenge of training early fusion architectures by leveraging the masked reconstruction framework, previously successful in unimodal settings, to train audio-visual encoders with early fusion.
We propose an attention-based fusion module that captures interactions between local audio and visual representations, enhancing the model's ability to capture fine-grained interactions.
- Score: 24.346868432774453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans possess a remarkable ability to integrate auditory and visual
information, enabling a deeper understanding of the surrounding environment.
This early fusion of audio and visual cues, demonstrated through cognitive
psychology and neuroscience research, offers promising potential for developing
multimodal perception models. However, training early fusion architectures
poses significant challenges, as the increased model expressivity requires
robust learning frameworks to harness their enhanced capabilities. In this
paper, we address this challenge by leveraging the masked reconstruction
framework, previously successful in unimodal settings, to train audio-visual
encoders with early fusion. Additionally, we propose an attention-based fusion
module that captures interactions between local audio and visual
representations, enhancing the model's ability to capture fine-grained
interactions. While effective, this procedure can become computationally
intractable, as the number of local representations increases. Thus, to address
the computational complexity, we propose an alternative procedure that
factorizes the local representations before representing audio-visual
interactions. Extensive evaluations on a variety of datasets demonstrate the
superiority of our approach in audio-event classification, visual sound
localization, sound separation, and audio-visual segmentation. These
contributions enable the efficient training of deeply integrated audio-visual
models and significantly advance the usefulness of early fusion architectures.
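The following is a minimal, illustrative PyTorch sketch of the two fusion strategies described in the abstract: dense cross-attention between all local audio and visual tokens, and a factorized variant that summarizes each modality into a small set of factor tokens before computing cross-modal interactions. All module names, dimensions, and the factor-token design are assumptions made for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class DenseFusionBlock(nn.Module):
        """Bidirectional cross-attention over all local audio/visual tokens;
        cost grows with num_audio_tokens * num_visual_tokens."""
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, audio, visual):
            # audio: (B, Na, dim), visual: (B, Nv, dim)
            a_fused, _ = self.a2v(audio, visual, visual)  # audio queries attend to visual tokens
            v_fused, _ = self.v2a(visual, audio, audio)   # visual queries attend to audio tokens
            return audio + a_fused, visual + v_fused

    class FactorizedFusionBlock(nn.Module):
        """Summarize each modality into a few learned factor tokens, then let
        local tokens attend only to the other modality's factors."""
        def __init__(self, dim: int, num_heads: int = 8, n_factors: int = 8):
            super().__init__()
            self.audio_factors = nn.Parameter(0.02 * torch.randn(n_factors, dim))
            self.visual_factors = nn.Parameter(0.02 * torch.randn(n_factors, dim))
            self.pool_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.pool_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, audio, visual):
            B = audio.size(0)
            fa = self.audio_factors.unsqueeze(0).expand(B, -1, -1)
            fv = self.visual_factors.unsqueeze(0).expand(B, -1, -1)
            fa, _ = self.pool_a(fa, audio, audio)    # compress audio tokens into factors
            fv, _ = self.pool_v(fv, visual, visual)  # compress visual tokens into factors
            a_fused, _ = self.a2v(audio, fv, fv)     # audio tokens attend to visual factors
            v_fused, _ = self.v2a(visual, fa, fa)    # visual tokens attend to audio factors
            return audio + a_fused, visual + v_fused

    if __name__ == "__main__":
        audio = torch.randn(2, 196, 256)   # e.g. spectrogram patch tokens
        visual = torch.randn(2, 588, 256)  # e.g. video patch tokens
        a, v = DenseFusionBlock(256)(audio, visual)
        a, v = FactorizedFusionBlock(256)(a, v)
        print(a.shape, v.shape)

Note the cost trade-off this sketch is meant to show: the dense block scales with the product of the audio and visual token counts, while the factorized block scales only linearly in each modality's token count for a fixed number of factor tokens.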
Related papers
- Audio-Visual Person Verification based on Recursive Fusion of Joint Cross-Attention [3.5803801804085347]
We introduce a joint cross-attentional model, where a joint audio-visual feature representation is employed in the cross-attention framework.
We also explore BLSTMs to improve the temporal modeling of audio-visual feature representations.
Results indicate that the proposed model shows promising improvement in fusion performance by adeptly capturing the intra- and inter-modal relationships (a generic sketch of this joint cross-attention pattern appears after this list).
arXiv Detail & Related papers (2024-03-07T16:57:45Z) - Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion
Latent Aligners [69.70590867769408]
Video and audio content creation is a core technique for the movie industry and professional users.
Existing diffusion-based methods tackle video and audio generation separately, which hinders the transfer of this technology from academia to industry.
In this work, we aim at filling the gap, with a carefully designed optimization-based framework for cross-visual-audio and joint-visual-audio generation.
arXiv Detail & Related papers (2024-02-27T17:57:04Z) - Audio-Visual Speaker Verification via Joint Cross-Attention [4.229744884478575]
We use cross-modal joint attention to fully leverage the inter-modal complementary information and the intra-modal information for speaker verification.
We have shown that efficiently leveraging the intra- and inter-modal relationships significantly improves the performance of audio-visual fusion for speaker verification.
arXiv Detail & Related papers (2023-09-28T16:25:29Z) - AV-SUPERB: A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models [92.92233932921741]
We propose the AV-SUPERB benchmark that enables general-purpose evaluation of unimodal audio/visual and bimodal fusion representations.
We evaluate 5 recent self-supervised models and show that none of these models generalize to all tasks.
We show that representations may be improved with intermediate-task fine-tuning, and that audio event classification with AudioSet serves as a strong intermediate task.
arXiv Detail & Related papers (2023-09-19T17:35:16Z) - Prompting Segmentation with Sound Is Generalizable Audio-Visual Source
Localizer [22.846623384472377]
We introduce the encoder-prompt-decoder paradigm to decode localization from the fused audio-visual feature.
Specifically, we first propose to construct Semantic-aware Audio Prompt (SAP) to help the visual foundation model focus on sounding objects.
We develop a Correlation Adapter (ColA) to keep training effort minimal while maintaining adequate knowledge of the visual foundation model.
arXiv Detail & Related papers (2023-09-13T05:43:35Z) - Improving Audio-Visual Speech Recognition by Lip-Subword Correlation
Based Visual Pre-training and Cross-Modal Fusion Encoder [58.523884148942166]
We propose two novel techniques to improve audio-visual speech recognition (AVSR) under a pre-training and fine-tuning training framework.
First, we explore the correlation between lip shapes and syllable-level subword units in Mandarin to establish good frame-level syllable boundaries from lip shapes.
Next, we propose an audio-guided cross-modal fusion encoder (CMFE) neural network to utilize main training parameters for multiple cross-modal attention layers.
arXiv Detail & Related papers (2023-08-14T08:19:24Z) - Audio-video fusion strategies for active speaker detection in meetings [5.61861182374067]
We propose two types of fusion for the detection of the active speaker, combining two visual modalities and an audio modality through neural networks.
For our application context, adding motion information greatly improves performance.
We have shown that attention-based fusion improves performance while reducing the standard deviation.
arXiv Detail & Related papers (2022-06-09T08:20:52Z) - Single-Layer Vision Transformers for More Accurate Early Exits with Less
Overhead [88.17413955380262]
We introduce a novel architecture for early exiting based on the vision transformer architecture.
We show that our method works for both classification and regression problems.
We also introduce a novel method for integrating audio and visual modalities within early exits in audiovisual data analysis.
arXiv Detail & Related papers (2021-05-19T13:30:34Z) - Data Fusion for Audiovisual Speaker Localization: Extending Dynamic
Stream Weights to the Spatial Domain [103.3388198420822]
Estimating the positions of multiple speakers can be helpful for tasks like automatic speech recognition or speaker diarization.
This paper proposes a novel audiovisual data fusion framework for speaker localization by assigning individual dynamic stream weights to specific regions.
A performance evaluation using audiovisual recordings yields promising results, with the proposed fusion approach outperforming all baseline models.
arXiv Detail & Related papers (2021-02-23T09:59:31Z) - Audio-Visual Event Localization via Recursive Fusion by Joint
Co-Attention [25.883429290596556]
The major challenge in audio-visual event localization task lies in how to fuse information from multiple modalities effectively.
Recent works have shown that attention mechanism is beneficial to the fusion process.
We propose a novel joint attention mechanism with multimodal fusion methods for audio-visual event localization.
arXiv Detail & Related papers (2020-08-14T21:50:26Z) - Curriculum Audiovisual Learning [113.20920928789867]
We present a flexible audiovisual model that introduces a soft-clustering module as the audio and visual content detector.
To ease the difficulty of audiovisual learning, we propose a novel learning strategy that trains the model from simple to complex scenes.
We show that our localization model significantly outperforms existing methods, and that, building on it, we achieve comparable performance in sound separation without relying on external visual supervision.
arXiv Detail & Related papers (2020-01-26T07:08:47Z)
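Several of the related papers above (e.g., the recursive joint cross-attention and joint cross-attention speaker-verification works) build on joint cross-attention, where each modality attends to a joint audio-visual representation. Below is a generic, hedged sketch of that pattern, assuming a simple concatenation-based joint representation; the class name, shapes, and design details are illustrative only and not taken from those papers.

    import torch
    import torch.nn as nn

    class JointCrossAttention(nn.Module):
        def __init__(self, dim: int, num_heads: int = 4):
            super().__init__()
            self.audio_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.visual_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, audio, visual):
            # audio: (B, Ta, dim), visual: (B, Tv, dim)
            joint = torch.cat([audio, visual], dim=1)          # joint audio-visual representation
            a_att, _ = self.audio_attn(audio, joint, joint)    # audio attends to the joint features
            v_att, _ = self.visual_attn(visual, joint, joint)  # visual attends to the joint features
            return a_att, v_att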