Learning Repeatable Speech Embeddings Using An Intra-class Correlation Regularizer
- URL: http://arxiv.org/abs/2310.17049v1
- Date: Wed, 25 Oct 2023 23:21:46 GMT
- Title: Learning Repeatable Speech Embeddings Using An Intra-class Correlation Regularizer
- Authors: Jianwei Zhang, Suren Jayasuriya, Visar Berisha
- Abstract summary: We evaluate the repeatability of embeddings using the intra-class correlation coefficient (ICC).
We propose a novel regularizer, the ICC regularizer, as a complementary component for contrastive losses to guide deep neural networks to produce embeddings with higher repeatability.
We implement the ICC regularizer and apply it to three speech tasks: speaker verification, voice style conversion, and a clinical application for detecting dysphonic voice.
- Score: 16.716653844774374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A good supervised embedding for a specific machine learning task is only
sensitive to changes in the label of interest and is invariant to other
confounding factors. We leverage the concept of repeatability from measurement
theory to describe this property and propose to use the intra-class correlation
coefficient (ICC) to evaluate the repeatability of embeddings. We then propose
a novel regularizer, the ICC regularizer, as a complementary component for
contrastive losses to guide deep neural networks to produce embeddings with
higher repeatability. We use simulated data to explain why the ICC regularizer
is more effective than the contrastive loss alone at minimizing intra-class
variance. We implement the ICC regularizer and apply it to three speech tasks:
speaker verification, voice style conversion, and a clinical application for
detecting dysphonic voice. The experimental results demonstrate that adding an
ICC regularizer can improve the repeatability of learned embeddings compared to
only using the contrastive loss; further, these embeddings lead to improved
performance in these downstream tasks.
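The abstract does not give the exact form of the ICC regularizer, so the following is only a minimal sketch of the idea: compute a one-way ICC(1,1) estimate per embedding dimension over a class-balanced batch, ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), and add a penalty of 1 - ICC next to a contrastive loss. This is a hypothetical PyTorch illustration, not the authors' implementation; the function names, the balanced-batch assumption, and the weight lambda_icc are assumptions.

```python
import torch


def icc_regularizer(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical ICC-based regularizer built on a one-way ICC(1,1) estimate.

    embeddings: (N, D) batch of embeddings.
    labels:     (N,) integer class labels; assumes a balanced batch with the
                same number of samples k >= 2 for each of G >= 2 classes.
    Returns a scalar to add to a contrastive loss; minimizing it pushes the
    per-dimension ICC toward 1, i.e. low intra-class variance relative to
    inter-class variance (high repeatability).
    """
    classes = labels.unique()
    n_groups = len(classes)
    k = embeddings.shape[0] // n_groups  # samples per class (balanced batch)

    group_means = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])  # (G, D)
    grand_mean = embeddings.mean(dim=0)                                                # (D,)

    # Between-group and within-group mean squares, computed per dimension.
    ms_between = k * ((group_means - grand_mean) ** 2).sum(dim=0) / (n_groups - 1)
    within_ss = torch.zeros_like(grand_mean)
    for c in classes:
        x = embeddings[labels == c]
        within_ss = within_ss + ((x - x.mean(dim=0)) ** 2).sum(dim=0)
    ms_within = within_ss / (n_groups * (k - 1))

    # ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW), averaged over dimensions.
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within + 1e-8)
    return 1.0 - icc.mean()


# Illustrative usage (contrastive_loss and lambda_icc are placeholders):
# total_loss = contrastive_loss(embeddings, labels) + lambda_icc * icc_regularizer(embeddings, labels)
```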
Related papers
- Multi-Granularity Semantic Revision for Large Language Model Distillation [66.03746866578274]
We propose a multi-granularity semantic revision method for LLM distillation.
At the sequence level, we propose a sequence correction and re-generation strategy.
At the token level, we design a distribution adaptive clipping Kullback-Leibler loss as the distillation objective function.
At the span level, we leverage the span priors of a sequence to compute the probability correlations within spans, and constrain the teacher and student's probability correlations to be consistent.
arXiv Detail & Related papers (2024-07-14T03:51:49Z) - On the Condition Monitoring of Bolted Joints through Acoustic Emission and Deep Transfer Learning: Generalization, Ordinal Loss and Super-Convergence [0.12289361708127876]
This paper investigates the use of deep transfer learning based on convolutional neural networks (CNNs) to monitor bolted joints using acoustic emissions.
We evaluate the performance of our methodology using the ORION-AE benchmark, a structure composed of two thin beams connected by three bolts.
arXiv Detail & Related papers (2024-05-29T13:07:21Z) - Coordinated Sparse Recovery of Label Noise [2.9495895055806804]
This study focuses on robust classification tasks where the label noise is instance-dependent.
We propose a method called Coordinated Sparse Recovery (CSR).
CSR introduces a collaboration matrix and confidence weights to coordinate model predictions and noise recovery, reducing error leakage.
Based on CSR, this study designs a joint sample selection strategy and constructs a comprehensive and powerful learning framework called CSR+.
arXiv Detail & Related papers (2024-04-07T03:41:45Z) - Fixed Random Classifier Rearrangement for Continual Learning [0.5439020425819]
In visual classification scenarios, neural networks inevitably forget the knowledge of old tasks after learning new ones.
We propose a continual learning algorithm named Fixed Random Classifier Rearrangement (FRCR).
arXiv Detail & Related papers (2024-02-23T09:43:58Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Directly Attention Loss Adjusted Prioritized Experience Replay [0.07366405857677226]
Prioritized Experience Replay (PER) enables the model to learn more from relatively important samples by artificially changing their accessed frequencies.
DALAP is proposed, which can directly quantify the extent of the distribution shift through a Parallel Self-Attention network.
arXiv Detail & Related papers (2023-11-24T10:14:05Z) - Dynamic Residual Classifier for Class Incremental Learning [4.02487511510606]
With imbalanced sample numbers between old and new classes, the learning can be biased.
Existing CIL methods exploit long-tailed (LT) recognition techniques, e.g., adjusted losses and data re-sampling methods.
A novel Dynamic Residual Classifier (DRC) is proposed to handle this challenging scenario.
arXiv Detail & Related papers (2023-08-25T11:07:11Z) - DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction [61.68415731896613]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation.
We propose a model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction.
arXiv Detail & Related papers (2023-05-03T12:34:45Z) - CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing music performance assessment (MPA) systems.
We introduce a weighted contrastive loss suitable for regression tasks applied to a convolutional neural network.
Our results show that contrastive-based methods are able to match and exceed SoTA performance for MPA regression tasks.
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.