Look, Listen, and Attend: Co-Attention Network for Self-Supervised
Audio-Visual Representation Learning
- URL: http://arxiv.org/abs/2008.05789v1
- Date: Thu, 13 Aug 2020 10:08:12 GMT
- Title: Look, Listen, and Attend: Co-Attention Network for Self-Supervised
Audio-Visual Representation Learning
- Authors: Ying Cheng, Ruize Wang, Zhihao Pan, Rui Feng, Yuejie Zhang
- Abstract summary: An underlying correlation between audio and visual events can be utilized as free supervised information to train a neural network.
We propose a novel self-supervised framework with co-attention mechanism to learn generic cross-modal representations from unlabelled videos.
Experiments show that our model achieves state-of-the-art performance on the pretext task while having fewer parameters compared with existing methods.
- Score: 17.6311804187027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When watching videos, the occurrence of a visual event is often accompanied
by an audio event, e.g., the voice that accompanies lip motion or the music produced by
playing instruments. There is an underlying correlation between audio and visual
events, which can be utilized as free supervised information to train a neural
network by solving the pretext task of audio-visual synchronization. In this
paper, we propose a novel self-supervised framework with co-attention mechanism
to learn generic cross-modal representations from unlabelled videos in the
wild, and further benefit downstream tasks. Specifically, we explore three
different co-attention modules that focus on the discriminative visual regions
correlated with the sounds and model the interactions between them.
Experiments show that our model achieves state-of-the-art performance on the
pretext task while having fewer parameters compared with existing methods. To
further evaluate the generalizability and transferability of our approach, we
apply the pre-trained model to two downstream tasks, i.e., sound source
localization and action recognition. Extensive experiments demonstrate that our
model achieves results competitive with other self-supervised methods, and also
indicate that our approach can handle challenging scenes that contain
multiple sound sources.
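A minimal PyTorch-style sketch of the idea described in the abstract: a co-attention block in which each modality attends to the other, followed by a head that solves the audio-visual synchronization pretext task (aligned vs. misaligned clip pairs). The module names, feature dimensions, mean pooling, and the use of nn.MultiheadAttention are illustrative assumptions for this example, not the authors' actual architecture.

import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    """Cross-modal attention in which each modality attends to the other."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Visual tokens query the audio frames, and vice versa.
        self.v_from_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.a_from_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)

    def forward(self, visual, audio):
        # visual: (B, H*W, dim) flattened spatial features; audio: (B, T, dim) frame features
        v_att, _ = self.v_from_a(query=visual, key=audio, value=audio)
        a_att, _ = self.a_from_v(query=audio, key=visual, value=visual)
        return self.norm_v(visual + v_att), self.norm_a(audio + a_att)


class SyncClassifier(nn.Module):
    """Pretext head: is this audio clip temporally aligned with this video clip?"""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.co_attn = CoAttentionBlock(dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, visual, audio):
        v, a = self.co_attn(visual, audio)
        pooled = torch.cat([v.mean(dim=1), a.mean(dim=1)], dim=-1)
        return self.head(pooled)  # logits over {aligned, misaligned}


if __name__ == "__main__":
    # Positive pairs come from the same moment of a video; negatives pair the
    # visual clip with audio shifted in time or taken from another video.
    clf = SyncClassifier()
    visual_tokens = torch.randn(4, 49, 512)  # e.g. a 7x7 CNN feature map, flattened
    audio_frames = torch.randn(4, 32, 512)   # e.g. projected log-mel spectrogram frames
    print(clf(visual_tokens, audio_frames).shape)  # torch.Size([4, 2])

Training such a classifier on misalignment labels that come for free from the video stream is what lets the co-attention weights highlight the visual regions correlated with the sound, which is then reused for sound source localization and action recognition.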
Related papers
- Language-Guided Audio-Visual Source Separation via Trimodal Consistency [64.0580750128049]
A key challenge in this task is learning to associate the linguistic description of a sound-emitting object to its visual features and the corresponding components of the audio waveform.
We adapt off-the-shelf vision-language foundation models to provide pseudo-target supervision via two novel loss functions.
We demonstrate the effectiveness of our self-supervised approach on three audio-visual separation datasets.
arXiv Detail & Related papers (2023-03-28T22:45:40Z) - Self-Supervised Audio-Visual Representation Learning with Relaxed
Cross-Modal Temporal Synchronicity [12.995632804090198]
CrissCross is a self-supervised framework for learning audio-visual representations.
We show that by relaxing the temporal synchronicity between the audio and visual modalities, the network learns strong time-invariant representations.
arXiv Detail & Related papers (2021-11-09T20:24:19Z) - LiRA: Learning Visual Speech Representations from Audio through
Self-supervision [53.18768477520411]
We propose Learning visual speech Representations from Audio via self-supervision (LiRA).
Specifically, we train a ResNet+Conformer model to predict acoustic features from unlabelled visual speech.
We show that our approach significantly outperforms other self-supervised methods on the Lip Reading in the Wild dataset.
arXiv Detail & Related papers (2021-06-16T23:20:06Z) - Learning Audio-Visual Correlations from Variational Cross-Modal
Generation [35.07257471319274]
We learn the audio-visual correlations from the perspective of cross-modal generation in a self-supervised manner.
The learned correlations can be readily applied in multiple downstream tasks such as the audio-visual cross-modal localization and retrieval.
arXiv Detail & Related papers (2021-02-05T21:27:00Z) - Self-Supervised Learning of Audio-Visual Objects from Video [108.77341357556668]
We introduce a model that uses attention to localize and group sound sources, and optical flow to aggregate information over time.
We demonstrate the effectiveness of the audio-visual object embeddings that our model learns by using them for four downstream speech-oriented tasks.
arXiv Detail & Related papers (2020-08-10T16:18:01Z) - Learning Speech Representations from Raw Audio by Joint Audiovisual
Self-Supervision [63.564385139097624]
We propose a method to learn self-supervised speech representations from the raw audio waveform.
We train a raw audio encoder by combining audio-only self-supervision (by predicting informative audio attributes) with visual self-supervision (by generating talking faces from audio).
Our results demonstrate the potential of multimodal self-supervision in audiovisual speech for learning good audio representations.
arXiv Detail & Related papers (2020-07-08T14:07:06Z) - Curriculum Audiovisual Learning [113.20920928789867]
We present a flexible audiovisual model that introduces a soft-clustering module as the audio and visual content detector.
To ease the difficulty of audiovisual learning, we propose a novel learning strategy that trains the model from simple to complex scenes.
We show that our localization model significantly outperforms existing methods, based on which we achieve comparable performance in sound separation without relying on external visual supervision.
arXiv Detail & Related papers (2020-01-26T07:08:47Z) - Visually Guided Self Supervised Learning of Speech Representations [62.23736312957182]
We propose a framework for learning audio representations guided by the visual modality in the context of audiovisual speech.
We employ a generative audio-to-video training scheme in which we animate a still image corresponding to a given audio clip and optimize the generated video to be as close as possible to the real video of the speech segment.
We achieve state-of-the-art results for emotion recognition and competitive results for speech recognition.
arXiv Detail & Related papers (2020-01-13T14:53:22Z)