Self-paced ensemble learning for speech and audio classification
- URL: http://arxiv.org/abs/2103.11988v1
- Date: Mon, 22 Mar 2021 16:34:06 GMT
- Title: Self-paced ensemble learning for speech and audio classification
- Authors: Nicolae-Catalin Ristea, Radu Tudor Ionescu
- Abstract summary: We propose a self-paced ensemble learning scheme in which models learn from each other over several iterations.
During the self-paced learning process, our ensemble also gains knowledge about the target domain.
Our empirical results indicate that SPEL significantly outperforms the baseline ensemble models.
- Score: 19.39192082485334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining multiple machine learning models into an ensemble is known to
provide superior performance levels compared to the individual components
forming the ensemble. This is because models can complement each other in
taking better decisions. Instead of just combining the models, we propose a
self-paced ensemble learning scheme in which models learn from each other over
several iterations. During the self-paced learning process based on
pseudo-labeling, in addition to improving the individual models, our ensemble
also gains knowledge about the target domain. To demonstrate the generality of
our self-paced ensemble learning (SPEL) scheme, we conduct experiments on three
audio tasks. Our empirical results indicate that SPEL significantly outperforms
the baseline ensemble models. We also show that applying self-paced learning on
individual models is less effective, illustrating the idea that models in the
ensemble actually learn from each other.
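To make the idea concrete, below is a minimal sketch of a self-paced ensemble pseudo-labeling loop in the spirit of the abstract, using scikit-learn classifiers on synthetic data. The particular models, the confidence-threshold schedule and the number of iterations are illustrative assumptions, not the SPEL configuration used in the paper.

```python
# Minimal sketch of a self-paced ensemble pseudo-labeling loop (illustrative,
# not the authors' exact SPEL procedure).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Labeled source data and unlabeled target data (synthetic stand-ins).
X_src, y_src = make_classification(n_samples=500, n_features=20, random_state=0)
X_tgt, _ = make_classification(n_samples=500, n_features=20, random_state=1)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    SVC(probability=True, random_state=0),
]

X_train, y_train = X_src.copy(), y_src.copy()
for it in range(3):                       # a few self-paced iterations (assumed)
    for m in models:                      # (re)train every ensemble member
        m.fit(X_train, y_train)

    # Ensemble prediction on the target domain: average class probabilities.
    probs = np.mean([m.predict_proba(X_tgt) for m in models], axis=0)
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)

    # Self-paced selection: keep only the most confident pseudo-labels,
    # relaxing the threshold as the ensemble improves (assumed schedule).
    keep = conf >= (0.95 - 0.05 * it)
    X_train = np.vstack([X_src, X_tgt[keep]])
    y_train = np.concatenate([y_src, pseudo[keep]])
```

The key ingredients match the abstract: the ensemble, not any single model, produces the pseudo-labels; only the most confident target-domain samples are added at each step; and every member is retrained on the enlarged set, so the models indirectly learn from each other.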
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Confidence-based Ensembles of End-to-End Speech Recognition Models [71.65982591023581]
We show that a confidence-based ensemble of 5 monolingual models outperforms a system where model selection is performed via a dedicated language identification block.
We also demonstrate that it is possible to combine base and adapted models to achieve strong results on both original and target data.
arXiv Detail & Related papers (2023-06-27T23:13:43Z)
- Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning [13.057037169495594]
We propose a Multi-mode Online Knowledge Distillation method (MOKD) to boost self-supervised visual representation learning.
In MOKD, two different models learn collaboratively in a self-supervised manner.
In addition, MOKD outperforms existing SSL-KD methods for both the student and teacher models.
arXiv Detail & Related papers (2023-04-13T12:55:53Z)
- Ensemble knowledge distillation of self-supervised speech models [84.69577440755457]
Distilled self-supervised models have shown competitive performance and efficiency in recent years.
We performed Ensemble Knowledge Distillation (EKD) on various self-supervised speech models such as HuBERT, RobustHuBERT, and WavLM.
Our method improves the performance of the distilled models on four downstream speech processing tasks.
arXiv Detail & Related papers (2023-02-24T17:15:39Z)
- Data-Free Diversity-Based Ensemble Selection For One-Shot Federated Learning in Machine Learning Model Market [2.9046424358155236]
We present a novel Data-Free Diversity-Based method called DeDES to address the ensemble selection problem for models generated by one-shot federated learning.
Our method achieves both better performance and higher efficiency across 5 datasets and 4 different model structures.
arXiv Detail & Related papers (2023-02-23T02:36:27Z)
- Joint Training of Deep Ensembles Fails Due to Learner Collusion [61.557412796012535]
Ensembles of machine learning models have been well established as a powerful method of improving performance over a single model.
Traditionally, ensembling algorithms train their base learners independently or sequentially with the goal of optimizing their joint performance.
We show that directly minimizing the loss of the ensemble is rarely applied in practice.
arXiv Detail & Related papers (2023-01-26T18:58:07Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space; a minimal parameter-averaging sketch appears after this list.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- On the Compositional Generalization Gap of In-Context Learning [73.09193595292233]
We look at the gap between the in-distribution (ID) and out-of-distribution (OOD) performance of such models in semantic parsing tasks with in-context learning.
We evaluate four model families, OPT, BLOOM, CodeGen and Codex, on three semantic parsing datasets.
arXiv Detail & Related papers (2022-11-15T19:56:37Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Ensemble deep learning: A review [0.0]
Ensemble learning combines several individual models to obtain better generalization performance.
Deep ensemble learning models combine the advantages of both deep learning and ensemble learning.
This paper reviews state-of-the-art deep ensemble models and hence serves as an extensive summary for researchers.
arXiv Detail & Related papers (2021-04-06T09:56:29Z)
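As referenced in the "Dataless Knowledge Fusion by Merging Weights of Language Models" entry above, merging models in parameter space can be illustrated with a toy sketch. Plain uniform averaging is shown only as the simplest instance of the idea; the function name `merge_models` and the toy state dictionaries are made up for illustration, and the paper's actual merging rule may be more elaborate.

```python
# Minimal illustration of merging models in parameter space: average
# same-shaped parameter tensors across models, layer by layer.
import numpy as np

def merge_models(state_dicts, weights=None):
    """Weighted average of per-layer parameters; uniform weights by default."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Toy example: three "fine-tuned" models with identical architectures.
rng = np.random.default_rng(0)
models = [{"linear.weight": rng.normal(size=(4, 8)),
           "linear.bias": rng.normal(size=4)} for _ in range(3)]
fused = merge_models(models)
print(fused["linear.weight"].shape)  # (4, 8)
```

Note that parameter-space merging of this kind only makes sense when the models share the same architecture, so that parameters with the same name have the same shape.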
This list is automatically generated from the titles and abstracts of the papers on this site.