Speech Representation Learning Through Self-supervised Pretraining And
Multi-task Finetuning
- URL: http://arxiv.org/abs/2110.09930v1
- Date: Mon, 18 Oct 2021 07:16:04 GMT
- Title: Speech Representation Learning Through Self-supervised Pretraining And
Multi-task Finetuning
- Authors: Yi-Chen Chen, Shu-wen Yang, Cheng-Kuang Lee, Simon See, Hung-yi Lee
- Abstract summary: We show that MTL finetuning can further improve SSL pretraining.
We analyze the generalizability of supervised MTL finetuning to examine if the speech representation learned by MTL finetuning can generalize to unseen new tasks.
- Score: 63.38155671200249
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speech representation learning plays a vital role in speech processing. Among
representation learning paradigms, self-supervised learning (SSL) has become an important research
direction. It has been shown that an SSL pretraining model can achieve
excellent performance in various downstream tasks of speech processing. On the
other hand, supervised multi-task learning (MTL) is another representation
learning paradigm, which has been proven effective in computer vision (CV) and
natural language processing (NLP). However, there is no systematic research on
the general representation learning model trained by supervised MTL in speech
processing. In this paper, we show that MTL finetuning can further improve SSL
pretraining. We analyze the generalizability of supervised MTL finetuning to
examine if the speech representation learned by MTL finetuning can generalize
to unseen new tasks.
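The abstract describes the recipe only at a high level; as a rough, non-authoritative illustration, the sketch below finetunes an SSL-pretrained speech encoder jointly with several task-specific heads under a summed multi-task loss. The choice of torchaudio's HUBERT_BASE bundle as the encoder and the two toy tasks (frame-level phoneme recognition and utterance-level speaker identification) are assumptions made for illustration, not the authors' exact configuration.

```python
# A minimal sketch of SSL pretraining followed by multi-task (MTL) finetuning.
# Assumptions for illustration: torchaudio's HUBERT_BASE bundle as the SSL encoder
# and two toy downstream tasks; the paper's actual encoder, task set, and loss
# weighting may differ.
import torch
import torch.nn as nn
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE      # SSL-pretrained encoder
encoder = bundle.get_model()                   # finetuned jointly with the heads

FEAT_DIM = 768                                 # HuBERT-Base hidden size

# One lightweight prediction head per task on top of the shared encoder.
heads = nn.ModuleDict({
    "phoneme": nn.Linear(FEAT_DIM, 40),        # frame-level phoneme logits (toy size)
    "speaker": nn.Linear(FEAT_DIM, 100),       # utterance-level speaker logits (toy size)
})

params = list(encoder.parameters()) + list(heads.parameters())
optimizer = torch.optim.Adam(params, lr=1e-5)

def mtl_step(waveforms, phoneme_targets, speaker_targets):
    """One MTL finetuning step: shared encoder, summed task losses.

    waveforms:       (batch, samples) at 16 kHz
    phoneme_targets: (batch, frames) long tensor, assumed aligned to encoder frames
    speaker_targets: (batch,) long tensor
    """
    features, _ = encoder(waveforms)                          # (batch, frames, FEAT_DIM)

    phoneme_logits = heads["phoneme"](features)               # per-frame prediction
    speaker_logits = heads["speaker"](features.mean(dim=1))   # pooled per utterance

    loss = (
        nn.functional.cross_entropy(
            phoneme_logits.flatten(0, 1), phoneme_targets.flatten())
        + nn.functional.cross_entropy(speaker_logits, speaker_targets)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the per-task losses are usually weighted and batches may interleave data from different tasks; those details are omitted from this sketch.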
Related papers
- Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models [22.676688441884465]
Fine-tuning pre-trained large language models (LLMs) on a diverse array of tasks has become a common approach for building models.
This study investigates the task-specific information encoded in pre-trained LLMs and the effects of instruction tuning on their representations.
arXiv Detail & Related papers (2024-10-25T23:38:28Z) - Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data [84.01401439030265]
Recent end-to-end speech language models (SLMs) have expanded upon the capabilities of large language models (LLMs).
We present a simple yet effective automatic process for creating speech-text pair data.
Our model demonstrates general capabilities for speech-related tasks without the need for speech instruction-tuning data.
arXiv Detail & Related papers (2024-09-30T07:01:21Z) - SINC: Self-Supervised In-Context Learning for Vision-Language Tasks [64.44336003123102]
We propose a framework to enable in-context learning in large language models.
A meta-model can learn on self-supervised prompts consisting of tailored demonstrations.
Experiments show that SINC outperforms gradient-based methods in various vision-language tasks.
arXiv Detail & Related papers (2023-07-15T08:33:08Z) - What In-Context Learning "Learns" In-Context: Disentangling Task
Recognition and Task Learning [24.395288160951118]
Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations.
We characterize two ways through which ICL leverages demonstrations: task recognition (TR) and task learning (TL).
We show that models can achieve non-trivial performance with only TR, and that TR does not further improve with larger models or more demonstrations.
arXiv Detail & Related papers (2023-05-16T18:05:19Z) - Seeing What You Miss: Vision-Language Pre-training with Semantic
Completion Learning [22.464424641734652]
Cross-modal alignment is essential for vision-language pre-training models.
We propose a novel Semantic Completion Learning task to facilitate global-to-local alignment.
We also present a flexible vision encoder, which enables our model to perform image-text and video-text multimodal tasks simultaneously.
arXiv Detail & Related papers (2022-11-24T06:39:16Z) - An Exploration of Prompt Tuning on Generative Spoken Language Model for
Speech Processing Tasks [112.1942546460814]
We report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).
Experiment results show that the prompt tuning technique achieves competitive performance in speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models.
arXiv Detail & Related papers (2022-03-31T03:26:55Z) - UniSpeech-SAT: Universal Speech Representation Learning with Speaker
Aware Pre-Training [72.004873454347]
Two methods are introduced for enhancing unsupervised speaker information extraction.
Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance.
We scale up the training dataset to 94 thousand hours of public audio data and achieve further performance improvements.
arXiv Detail & Related papers (2021-10-12T05:43:30Z) - SUPERB: Speech processing Universal PERformance Benchmark [78.41287216481203]
Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV).
SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks.
We present a simple framework to solve SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model; a minimal sketch of this setup follows the list below.
arXiv Detail & Related papers (2021-05-03T17:51:09Z)
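For the SUPERB entry above, the sketch below illustrates the frozen-shared-model setup: the SSL model's parameters are not updated, and only a lightweight prediction head (plus, in this sketch, a learnable weighted sum over layer features) is trained per task. The use of torchaudio's WAV2VEC2_BASE bundle, the layer-weighting, and the toy classification head are illustrative assumptions rather than the benchmark's reference implementation.

```python
# A minimal sketch of a SUPERB-style downstream setup: frozen shared SSL model,
# trainable lightweight head. Model choice, layer-weighting, and head size are
# assumptions for illustration only.
import torch
import torch.nn as nn
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
ssl_model = bundle.get_model().eval()
for p in ssl_model.parameters():                 # freeze the shared model
    p.requires_grad_(False)

FEAT_DIM, NUM_LAYERS, NUM_CLASSES = 768, 12, 12  # toy classification head

layer_weights = nn.Parameter(torch.zeros(NUM_LAYERS))    # learnable layer mixture
head = nn.Linear(FEAT_DIM, NUM_CLASSES)                  # lightweight prediction head
optimizer = torch.optim.Adam([layer_weights, *head.parameters()], lr=1e-3)

def superb_step(waveforms, labels):
    """Train only the head (and layer weights) on top of frozen SSL features."""
    with torch.no_grad():
        layers, _ = ssl_model.extract_features(waveforms)    # list of per-layer features
    stacked = torch.stack(layers, dim=0)                     # (layers, batch, frames, dim)
    weights = torch.softmax(layer_weights, dim=0)
    features = (weights[:, None, None, None] * stacked).sum(dim=0)

    logits = head(features.mean(dim=1))                      # utterance-level pooling
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Swapping in a different downstream task only changes the head and the loss; the shared model and its extracted features stay fixed.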