Vision Transformers are Parameter-Efficient Audio-Visual Learners
- URL: http://arxiv.org/abs/2212.07983v2
- Date: Wed, 5 Apr 2023 17:41:12 GMT
- Title: Vision Transformers are Parameter-Efficient Audio-Visual Learners
- Authors: Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius
- Abstract summary: We propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks.
Compared to existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks.
- Score: 95.59258503297195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision transformers (ViTs) have achieved impressive results on various
computer vision tasks in the last several years. In this work, we study the
capability of frozen ViTs, pretrained only on visual data, to generalize to
audio-visual data without finetuning any of their original parameters. To do so,
we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained
ViTs to audio-visual tasks by injecting a small number of trainable parameters
into every layer of a frozen ViT. To efficiently fuse visual and audio cues,
our LAVISH adapter uses a small set of latent tokens, which form an attention
bottleneck, thus eliminating the quadratic cost of standard cross-attention.
Compared to the existing modality-specific audio-visual methods, our approach
achieves competitive or even better performance on various audio-visual tasks
while using fewer tunable parameters and without relying on costly audio
pretraining or external audio encoders. Our code is available at
https://genjib.github.io/project_page/LAVISH/
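To make the bottleneck idea concrete, the sketch below shows one way a small, fixed set of latent tokens can mediate audio-visual fusion: the latents first attend to the source modality, then the target modality attends to the latents, so attention cost grows linearly in each token count rather than quadratically. This is a minimal illustration under assumed details (the module name LatentBottleneckFusion, token shapes, and the use of torch.nn.MultiheadAttention are hypothetical), not the official LAVISH implementation.

```python
import torch
import torch.nn as nn


class LatentBottleneckFusion(nn.Module):
    """Illustrative latent-token attention bottleneck (hypothetical, not the
    official LAVISH code). L latent tokens attend to the M source-modality
    tokens, and the N target-modality tokens then attend to the latents,
    giving O(L*M + L*N) attention cost instead of O(N*M) full cross-attention."""

    def __init__(self, dim: int, num_latents: int = 8, num_heads: int = 4):
        super().__init__()
        # Trainable latent tokens; the backbone ViT itself would stay frozen.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.collect = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.distribute = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # target_tokens: (B, N, dim) tokens to be updated (e.g. visual patches)
        # source_tokens: (B, M, dim) tokens of the other modality (e.g. audio patches)
        batch = target_tokens.size(0)
        latents = self.latents.unsqueeze(0).expand(batch, -1, -1)          # (B, L, dim)
        latents, _ = self.collect(latents, source_tokens, source_tokens)   # latents summarize the source
        fused, _ = self.distribute(target_tokens, latents, latents)        # targets read the compressed summary
        return target_tokens + fused                                        # residual keeps frozen features intact


# Example: fuse audio cues into visual tokens at one frozen ViT layer.
if __name__ == "__main__":
    fusion = LatentBottleneckFusion(dim=768)
    visual = torch.randn(2, 196, 768)   # 14x14 visual patch tokens
    audio = torch.randn(2, 128, 768)    # audio spectrogram patch tokens
    print(fusion(visual, audio).shape)  # torch.Size([2, 196, 768])
```

In the paper's setting, such an adapter would be injected at every layer of the frozen ViT, with only the adapter parameters (and task head) trained; the exact placement, normalization, and weight sharing used by LAVISH may differ from this sketch.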
Related papers
- AV-DiT: Efficient Audio-Visual Diffusion Transformer for Joint Audio and Video Generation [33.315479764894086]
We introduce AV-DiT, a novel and efficient audio-visual diffusion transformer.
A shared DiT backbone pre-trained on image-only data facilitates both audio and video generation.
Experiments on the AIST++ and Landscape datasets demonstrate that AV-DiT achieves state-of-the-art performance in joint audio-visual generation.
arXiv Detail & Related papers (2024-06-11T20:05:58Z) - Siamese Vision Transformers are Scalable Audio-visual Learners [19.916919837694802]
We investigate using an audio-visual siamese network (AVSiam) for efficient and scalable audio-visual pretraining.
Our framework uses a single shared vision transformer backbone to process audio and visual inputs.
Our method can robustly handle audio, visual, and audio-visual inputs with a single shared ViT backbone.
arXiv Detail & Related papers (2024-03-28T17:52:24Z) - Text-to-feature diffusion for audio-visual few-shot learning [59.45164042078649]
Few-shot learning from video data is a challenging and underexplored, yet much cheaper, setup.
We introduce a unified audio-visual few-shot video classification benchmark on three datasets.
We show that AV-DIFF obtains state-of-the-art performance on our proposed benchmark for audio-visual few-shot learning.
arXiv Detail & Related papers (2023-09-07T17:30:36Z) - AVSegFormer: Audio-Visual Segmentation with Transformer [42.24135756439358]
A new audio-visual segmentation (AVS) task has been introduced, aiming to locate and segment the sounding objects in a given video.
This task demands audio-driven pixel-level scene understanding for the first time, posing significant challenges.
We propose AVSegFormer, a novel framework for AVS tasks that leverages the transformer architecture.
arXiv Detail & Related papers (2023-07-03T16:37:10Z) - Visually-Guided Sound Source Separation with Audio-Visual Predictive
Coding [57.08832099075793]
Visually-guided sound source separation consists of three parts: visual feature extraction, multimodal feature fusion, and sound signal processing.
This paper presents audio-visual predictive coding (AVPC) to tackle this task in a parameter-harmonizing and more effective manner.
In addition, we develop a valid self-supervised learning strategy for AVPC via co-predicting two audio-visual representations of the same sound source.
arXiv Detail & Related papers (2023-06-19T03:10:57Z) - Dense-Localizing Audio-Visual Events in Untrimmed Videos: A Large-Scale
Benchmark and Baseline [53.07236039168652]
We focus on the task of dense-localizing audio-visual events, which aims to jointly localize and recognize all audio-visual events occurring in an untrimmed video.
We introduce the first Untrimmed Audio-Visual dataset, which contains 10K untrimmed videos with over 30K audio-visual events.
Next, we formulate the task using a new learning-based framework, which is capable of fully integrating audio and visual modalities to localize audio-visual events with various lengths and capture dependencies between them in a single pass.
arXiv Detail & Related papers (2023-03-22T22:00:17Z) - TVLT: Textless Vision-Language Transformer [89.31422264408002]
We present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs.
TVLT attains performance comparable to its text-based counterpart on various multimodal tasks.
Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals.
arXiv Detail & Related papers (2022-09-28T15:08:03Z) - Audiomer: A Convolutional Transformer for Keyword Spotting [0.0]
We introduce Audiomer, which combines 1D Residual Networks with Performer Attention to achieve state-of-the-art performance in Keyword Spotting.
Audiomer allows for deployment in compute-constrained devices and training on smaller datasets.
arXiv Detail & Related papers (2021-09-21T15:28:41Z) - Unsupervised Audiovisual Synthesis via Exemplar Autoencoders [59.13989658692953]
We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of potentially infinitely many output speakers.
We use Exemplar Autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target speech exemplar.
arXiv Detail & Related papers (2020-01-13T18:56:45Z)