Codified audio language modeling learns useful representations for music
information retrieval
- URL: http://arxiv.org/abs/2107.05677v1
- Date: Mon, 12 Jul 2021 18:28:50 GMT
- Title: Codified audio language modeling learns useful representations for music
information retrieval
- Authors: Rodrigo Castellon and Chris Donahue and Percy Liang
- Abstract summary: We show that language models pre-trained on codified (discretely-encoded) music audio learn representations that are useful for downstream MIR tasks.
To determine if Jukebox's representations contain useful information for MIR, we use them as input features to train shallow models on several MIR tasks.
We observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling may address blind spots in conventional approaches.
- Score: 77.63657430536593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We demonstrate that language models pre-trained on codified
(discretely-encoded) music audio learn representations that are useful for
downstream MIR tasks. Specifically, we explore representations from Jukebox
(Dhariwal et al. 2020): a music generation system containing a language model
trained on codified audio from 1M songs. To determine if Jukebox's
representations contain useful information for MIR, we use them as input
features to train shallow models on several MIR tasks. Relative to
representations from conventional MIR models which are pre-trained on tagging,
we find that using representations from Jukebox as input features yields 30%
stronger performance on average across four MIR tasks: tagging, genre
classification, emotion recognition, and key detection. For key detection, we
observe that representations from Jukebox are considerably stronger than those
from models pre-trained on tagging, suggesting that pre-training via codified
audio language modeling may address blind spots in conventional approaches. We
interpret the strength of Jukebox's representations as evidence that modeling
audio instead of tags provides richer representations for MIR.
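The probing protocol described in the abstract (freeze the pretrained representations, then train shallow models on them for each MIR task) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical Python example assuming Jukebox-style features have already been extracted and pooled per clip into NumPy arrays; the file names, shapes, and the choice of a logistic-regression probe are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of a shallow probe on frozen pretrained-audio-LM features.
# File names, feature dimensionality, and the logistic-regression probe are
# placeholders, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Pre-extracted, per-clip representations (e.g. mean-pooled activations from
# an intermediate layer of the codified audio language model) and task labels
# such as genre, emotion, or key.
X_train = np.load("features_train.npy")  # shape: (n_clips, feature_dim)
y_train = np.load("labels_train.npy")    # shape: (n_clips,)
X_test = np.load("features_test.npy")
y_test = np.load("labels_test.npy")

# Shallow probe: standardize the frozen features, then fit a linear classifier.
probe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
probe.fit(X_train, y_train)

print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

Because the backbone stays frozen, comparing probes trained on different feature sets (e.g. Jukebox activations versus features from a tagging model) isolates the quality of the representations themselves.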
Related papers
- A Novel Audio Representation for Music Genre Identification in MIR [3.203495505471781]
For Music Information Retrieval downstream tasks, the most common audio representation is time-frequency-based, such as Mel spectrograms.
This study explores a new form of audio representation for one of the most common MIR downstream tasks.
A novel audio representation is derived from the generative music model Jukebox.
The effectiveness of Jukebox's audio representation is compared to Mel spectrograms using a dataset nearly equivalent to the state-of-the-art (SOTA) setup and a nearly identical transformer design.
arXiv Detail & Related papers (2024-04-01T11:40:09Z) - Text-to-feature diffusion for audio-visual few-shot learning [59.45164042078649]
Few-shot learning from video data is a challenging and underexplored, yet much cheaper, setup.
We introduce a unified audio-visual few-shot video classification benchmark on three datasets.
We show that AV-DIFF obtains state-of-the-art performance on our proposed benchmark for audio-visual few-shot learning.
arXiv Detail & Related papers (2023-09-07T17:30:36Z) - AudioFormer: Audio Transformer learns audio feature representations from
discrete acoustic codes [6.375996974877916]
We propose a method named AudioFormer, which learns audio feature representations through the acquisition of discrete acoustic codes.
Our research outcomes demonstrate that AudioFormer attains significantly improved performance compared to prevailing monomodal audio classification models.
arXiv Detail & Related papers (2023-08-14T15:47:25Z) - AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot
AV-ASR [79.21857972093332]
We present AVFormer, a method for augmenting audio-only models with visual information while performing lightweight domain adaptation.
We show that these can be trained on a small amount of weakly labelled video data with minimum additional training time and parameters.
We also introduce a simple curriculum scheme during training which we show is crucial to enable the model to jointly process audio and visual information effectively.
arXiv Detail & Related papers (2023-03-29T07:24:28Z) - VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for
Speech Representation Learning [119.49605266839053]
We propose a unified cross-modal representation learning framework, VATLM (Visual-Audio-Text Language Model).
The proposed VATLM employs a unified backbone network to model the modality-independent information.
In order to integrate these three modalities into one shared semantic space, VATLM is optimized with a masked prediction task of unified tokens.
arXiv Detail & Related papers (2022-11-21T09:10:10Z) - Contrastive Audio-Visual Masked Autoencoder [85.53776628515561]
We propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE).
Our fully self-supervised pretrained CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound.
arXiv Detail & Related papers (2022-10-02T07:29:57Z) - Learning music audio representations via weak language supervision [14.335950077921435]
We design a multimodal architecture for music and language pre-training (MuLaP) optimised via a set of proxy tasks.
Weak supervision is provided in the form of noisy natural language descriptions conveying the overall musical content of the track.
We demonstrate the usefulness of our approach by comparing the performance of audio representations produced by the same audio backbone with different training strategies.
arXiv Detail & Related papers (2021-12-08T10:30:52Z) - Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio
Representation [51.37980448183019]
We propose Audio ALBERT, a lite version of the self-supervised speech representation model.
We show that Audio ALBERT achieves performance competitive with much larger models on downstream tasks.
In probing experiments, we find that the latent representations encode richer information about both phonemes and speakers than the last layer.
arXiv Detail & Related papers (2020-05-18T10:42:44Z)