FLAP: Fast Language-Audio Pre-training
- URL: http://arxiv.org/abs/2311.01615v1
- Date: Thu, 2 Nov 2023 21:58:50 GMT
- Title: FLAP: Fast Language-Audio Pre-training
- Authors: Ching-Feng Yeh, Po-Yao Huang, Vasu Sharma, Shang-Wen Li and Gargi Ghosh
- Abstract summary: We propose Fast Language-Audio Pre-training (FLAP), a self-supervised approach that efficiently learns aligned audio and language representations.
For efficiency, FLAP randomly drops audio spectrogram tokens, focusing solely on the remaining ones for self-supervision.
FLAP learns to align paired audio and text representations in a shared latent space.
- Score: 16.46254370386555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Fast Language-Audio Pre-training (FLAP), a self-supervised
approach that efficiently and effectively learns aligned audio and language
representations through masking, contrastive learning and reconstruction. For
efficiency, FLAP randomly drops audio spectrogram tokens, focusing solely on
the remaining ones for self-supervision. Through inter-modal contrastive
learning, FLAP learns to align paired audio and text representations in a
shared latent space. Notably, FLAP leverages multiple augmented views via
masking for inter-modal contrast and learns to reconstruct the masked portion
of audio tokens. Moreover, FLAP leverages large language models (LLMs) to
augment the text inputs, contributing to improved performance. These approaches
lead to more robust and informative audio-text representations, enabling FLAP
to achieve state-of-the-art (SoTA) performance on audio-text retrieval tasks on
AudioCaps (achieving 53.0% R@1) and Clotho (achieving 25.5% R@1).
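For illustration, below is a minimal PyTorch-style Python sketch of the two mechanisms the abstract highlights: randomly dropping audio spectrogram tokens for efficiency, and aligning paired audio and text embeddings in a shared latent space with an inter-modal contrastive loss. The function names, tensor dimensions, and the mean-pooling stand-in for the audio encoder are illustrative assumptions, not FLAP's actual implementation; the reconstruction objective and the LLM-based text augmentation are omitted.

# Minimal sketch (not FLAP's code): random token dropping + inter-modal
# contrastive alignment of paired audio/text embeddings.
import torch
import torch.nn.functional as F


def drop_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Randomly keep a subset of spectrogram tokens per example.

    tokens: (batch, num_tokens, dim). A fresh random subset is drawn for each
    example, which also yields different masked "views" across training steps.
    """
    b, n, d = tokens.shape
    n_keep = max(1, int(n * keep_ratio))
    # Random permutation per example; keep the first n_keep indices.
    idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)[:, :n_keep]
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, d))


def contrastive_loss(audio_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over paired audio/text embeddings of shape (batch, dim)."""
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Dummy batch: 8 clips with 64 spectrogram tokens of dim 512, plus pooled
    # text embeddings of the same dim (real audio/text encoders are omitted).
    audio_tokens = torch.randn(8, 64, 512)
    text_emb = torch.randn(8, 512)

    kept = drop_tokens(audio_tokens, keep_ratio=0.5)   # fewer tokens to encode
    audio_emb = kept.mean(dim=1)                       # stand-in for encoder + pooling
    loss = contrastive_loss(audio_emb, text_emb)
    print(kept.shape, loss.item())

In the paper's framing, the dropped tokens serve double duty: they make encoding cheaper and they provide the masked targets for the reconstruction objective, while the contrastive term above is what aligns the two modalities for retrieval.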
Related papers
- PALM: Few-Shot Prompt Learning for Audio Language Models [1.6177972328875514]
Audio-Language Models (ALMs) have recently achieved remarkable success in zero-shot audio recognition tasks.
We propose a novel method, Prompt Learning in Audio Language Models (PALM), which optimizes the feature space of the text encoder branch.
We demonstrate the effectiveness of our approach on 11 audio recognition datasets, and compare the results with three baselines in a few-shot learning setup.
arXiv Detail & Related papers (2024-09-29T22:06:07Z) - Audio-visual Generalized Zero-shot Learning the Easy Way [20.60905505473906]
We introduce EZ-AVGZL, which aligns audio-visual embeddings with transformed text representations.
We conduct extensive experiments on VGGSound-GZSL, UCF-GZSL, and ActivityNet-GZSL benchmarks.
arXiv Detail & Related papers (2024-07-18T01:57:16Z) - Unified Video-Language Pre-training with Synchronized Audio [21.607860535968356]
We propose an enhanced framework for Video-Language pre-training with Synchronized Audio.
Our framework learns tri-modal representations in a unified self-supervised transformer.
Our model, pre-trained on only 0.9M data, achieves improved results compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-12T07:59:46Z) - XLAVS-R: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception [62.660135152900615]
Speech recognition and translation systems perform poorly on noisy inputs.
XLAVS-R is a cross-lingual audio-visual speech representation model for noise-robust speech recognition and translation.
arXiv Detail & Related papers (2024-03-21T13:52:17Z) - Where Visual Speech Meets Language: VSP-LLM Framework for Efficient and Context-Aware Visual Speech Processing [56.71450690166821]
We propose a novel framework, namely Visual Speech Processing incorporated with LLMs (VSP-LLM).
VSP-LLM is designed to perform multi-tasks of visual speech recognition and translation.
We show that VSP-LLM trained on just 30 hours of labeled data can more effectively translate lip movements.
arXiv Detail & Related papers (2024-02-23T07:21:32Z) - Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities [37.02115473120654]
Augmenting large language models (LLMs) to understand audio is critically important for diverse real-world applications.
In this paper, we propose Audio Flamingo, a novel audio language model with 1) strong audio understanding abilities, 2) the ability to quickly adapt to unseen tasks via in-context learning and retrieval, and 3) strong multi-turn dialogue abilities.
arXiv Detail & Related papers (2024-02-02T18:58:34Z) - Weakly-supervised Automated Audio Captioning via text only training [1.504795651143257]
We propose a weakly-supervised approach to train an automated audio captioning (AAC) model using only text data and a pre-trained CLAP model.
We evaluate our proposed method on the Clotho and AudioCaps datasets, demonstrating its ability to achieve a relative performance of up to 83% compared to fully supervised approaches.
arXiv Detail & Related papers (2023-09-21T16:40:46Z) - AudioPaLM: A Large Language Model That Can Speak and Listen [79.44757696533709]
We introduce AudioPaLM, a large language model for speech understanding and generation.
AudioPaLM fuses text-based and speech-based language models.
It can process and generate text and speech with applications including speech recognition and speech-to-speech translation.
arXiv Detail & Related papers (2023-06-22T14:37:54Z) - Exploring the Role of Audio in Video Captioning [59.679122191706426]
We present an audio-visual framework, which aims to fully exploit the potential of the audio modality for captioning.
We propose new local-global fusion mechanisms to improve information exchange across audio and video.
arXiv Detail & Related papers (2023-06-21T20:54:52Z) - MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition [51.412413996510814]
We propose MixSpeech, a cross-modality self-learning framework that utilizes audio speech to regularize the training of visual speech tasks.
MixSpeech enhances speech translation in noisy environments, improving BLEU scores for four languages on AVMuST-TED by +1.4 to +4.2.
arXiv Detail & Related papers (2023-03-09T14:58:29Z) - VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning [119.49605266839053]
We propose a unified cross-modal representation learning framework, VATLM (Visual-Audio-Text Language Model).
The proposed VATLM employs a unified backbone network to model the modality-independent information.
In order to integrate these three modalities into one shared semantic space, VATLM is optimized with a masked prediction task of unified tokens.
arXiv Detail & Related papers (2022-11-21T09:10:10Z)