Masked Autoencoders that Listen
- URL: http://arxiv.org/abs/2207.06405v1
- Date: Wed, 13 Jul 2022 17:59:55 GMT
- Title: Masked Autoencoders that Listen
- Authors: Po-Yao (Bernie) Huang, Hu Xu, Juncheng Li, Alexei Baevski, Michael
Auli, Wojciech Galuba, Florian Metze, Christoph Feichtenhofer
- Abstract summary: This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms.
Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through encoder layers.
The decoder then re-orders and decodes the encoded context padded with mask tokens, in order to reconstruct the input spectrogram.
- Score: 79.99280830830854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies a simple extension of image-based Masked Autoencoders
(MAE) to self-supervised representation learning from audio spectrograms.
Following the Transformer encoder-decoder design in MAE, our Audio-MAE first
encodes audio spectrogram patches with a high masking ratio, feeding only the
non-masked tokens through encoder layers. The decoder then re-orders and
decodes the encoded context padded with mask tokens, in order to reconstruct
the input spectrogram. We find it beneficial to incorporate local window
attention in the decoder, as audio spectrograms are highly correlated in local
time and frequency bands. We then fine-tune the encoder with a lower masking
ratio on target datasets. Empirically, Audio-MAE sets new state-of-the-art
performance on six audio and speech classification tasks, outperforming other
recent models that use external supervised pre-training. The code and models
will be at https://github.com/facebookresearch/AudioMAE.
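Below is a minimal sketch of the mask-and-unshuffle flow the abstract describes: only visible patches pass through the encoder, and the decoder input is rebuilt by padding with mask tokens and restoring the original order. It assumes a PyTorch-style (batch, tokens, dim) layout for spectrogram patch embeddings; the function names, the 0.8 masking ratio, and the stand-in encoder are illustrative and are not taken from the released AudioMAE code.
```python
# Sketch of the masking / unshuffling mechanic described in the abstract.
# All names and constants here are assumptions for illustration only.
import torch


def random_mask(patches: torch.Tensor, mask_ratio: float = 0.8):
    """Keep a random subset of patches; also return the permutation needed
    to put tokens back into their original spectrogram order later."""
    B, L, D = patches.shape
    num_keep = int(L * (1.0 - mask_ratio))
    noise = torch.rand(B, L, device=patches.device)   # one score per token
    ids_shuffle = torch.argsort(noise, dim=1)          # random permutation
    ids_restore = torch.argsort(ids_shuffle, dim=1)    # its inverse
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_restore, num_keep


def pad_and_unshuffle(encoded: torch.Tensor, ids_restore: torch.Tensor,
                      num_keep: int, mask_token: torch.Tensor):
    """Append mask tokens to the encoded visible tokens and re-order
    everything so the decoder sees the original time-frequency layout."""
    B, _, D = encoded.shape
    L = ids_restore.shape[1]
    mask_tokens = mask_token.expand(B, L - num_keep, D)
    full = torch.cat([encoded, mask_tokens], dim=1)    # still in shuffled order
    return torch.gather(full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, D))


# Usage: only the visible ~20% of tokens pass through the (expensive) encoder;
# the decoder then reconstructs the full spectrogram. The paper's decoder also
# uses local window attention over the time-frequency grid, omitted here.
patches = torch.randn(2, 512, 768)                     # fake patch embeddings
visible, ids_restore, num_keep = random_mask(patches, mask_ratio=0.8)
encoded = visible                                       # stand-in for encoder(visible)
mask_token = torch.zeros(1, 1, 768)                     # would be a learned parameter
decoder_input = pad_and_unshuffle(encoded, ids_restore, num_keep, mask_token)
assert decoder_input.shape == patches.shape
```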
Related papers
- Hold Me Tight: Stable Encoder-Decoder Design for Speech Enhancement [1.4037575966075835]
1-D filters on raw audio are hard to train and often suffer from instabilities.
We address these problems with hybrid solutions, combining theory-driven and data-driven approaches.
arXiv Detail & Related papers (2024-08-30T15:49:31Z)
- WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling [65.30937248905958]
A crucial component of language models is the tokenizer, which compresses high-dimensional natural signals into lower-dimensional discrete tokens.
We introduce WavTokenizer, which offers several advantages over previous SOTA acoustic models in the audio domain.
WavTokenizer achieves state-of-the-art reconstruction quality with outstanding UTMOS scores and inherently contains richer semantic information.
arXiv Detail & Related papers (2024-08-29T13:43:36Z)
- Rethinking Patch Dependence for Masked Autoencoders [92.37365660775171]
We re-examine inter-patch dependencies in the decoding mechanism of masked autoencoders (MAE).
We propose a novel pretraining framework: Cross-Attention Masked Autoencoders (CrossMAE).
arXiv Detail & Related papers (2024-01-25T18:49:57Z)
- A-JEPA: Joint-Embedding Predictive Architecture Can Listen [35.308323314848735]
We introduce Audio-based Joint-Embedding Predictive Architecture (A-JEPA), a simple extension method for self-supervised learning from the audio spectrum.
A-JEPA encodes visible audio spectrogram patches with a curriculum masking strategy via context encoder, and predicts the representations of regions sampled at well-designed locations.
arXiv Detail & Related papers (2023-11-27T13:53:53Z)
- Large-scale unsupervised audio pre-training for video-to-speech synthesis [64.86087257004883]
Video-to-speech synthesis is the task of reconstructing the speech signal from a silent video of a speaker.
In this paper we propose to train encoder-decoder models on more than 3,500 hours of audio data at 24kHz.
We then use the pre-trained decoders to initialize the audio decoders for the video-to-speech synthesis task.
arXiv Detail & Related papers (2023-06-27T13:31:33Z)
- Diffsound: Discrete Diffusion Model for Text-to-sound Generation [78.4128796899781]
We propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder.
The framework first uses the decoder, with the help of the VQ-VAE, to convert the text features extracted by the text encoder into a mel-spectrogram, and then the vocoder transforms the generated mel-spectrogram into a waveform.
arXiv Detail & Related papers (2022-07-20T15:41:47Z)
- MAE-AST: Masked Autoencoding Audio Spectrogram Transformer [11.814012909512307]
We propose a simple yet powerful improvement over the recent Self-Supervised Audio Spectrogram Transformer (SSAST) model for speech and audio classification.
We leverage the insight that the SSAST uses a very high masking ratio (75%) during pretraining, meaning that the vast majority of self-attention compute is performed on mask tokens.
We find that MAE-like pretraining can provide a 3x speedup and a 2x reduction in memory usage over the vanilla SSAST (see the back-of-envelope FLOP sketch after this list).
arXiv Detail & Related papers (2022-03-30T22:06:13Z)
- Context Autoencoder for Self-Supervised Representation Learning [64.63908944426224]
We pretrain an encoder by making predictions in the encoded representation space.
The network is an encoder-regressor-decoder architecture.
We demonstrate the effectiveness of our CAE through superior transfer performance in downstream tasks.
arXiv Detail & Related papers (2022-02-07T09:33:45Z)
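Following up on the MAE-AST entry above, here is a back-of-envelope estimate of why routing only the visible tokens through the encoder saves so much compute at a 75% masking ratio. The token count, embedding width, and MLP ratio below are assumed ViT-Base-like values, not figures from any of the papers listed here.
```python
# Rough per-layer FLOP count for a Transformer encoder layer; used only to
# illustrate the MAE-AST argument above. L, d, and mlp_ratio are assumptions.
def encoder_layer_flops(tokens: int, dim: int, mlp_ratio: int = 4) -> float:
    attn = 4 * tokens * dim ** 2 + 2 * tokens ** 2 * dim  # QKV/output projections + attention matmuls
    mlp = 2 * tokens * dim * (mlp_ratio * dim)            # two feed-forward projections
    return attn + mlp


L, d = 512, 768                                  # assumed patch count and ViT-Base width
full = encoder_layer_flops(L, d)                 # SSAST-style: mask tokens enter the encoder
visible = encoder_layer_flops(int(0.25 * L), d)  # MAE-style: only the 25% visible tokens
print(f"encoder-layer FLOP ratio: {full / visible:.1f}x")
# Prints a ratio in the low single digits, consistent in spirit with the ~3x
# end-to-end speedup reported for MAE-AST (which also includes a decoder and other costs).
```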