Domain Adaptation Using Class Similarity for Robust Speech Recognition
- URL: http://arxiv.org/abs/2011.02782v1
- Date: Thu, 5 Nov 2020 12:26:43 GMT
- Title: Domain Adaptation Using Class Similarity for Robust Speech Recognition
- Authors: Han Zhu, Jiangjiang Zhao, Yuling Ren, Li Wang, Pengyuan Zhang
- Abstract summary: This paper proposes a novel adaptation method for deep neural network (DNN) acoustic models using class similarity.
Experiments showed that the approach outperforms fine-tuning with one-hot labels on both accent and noise adaptation tasks.
- Score: 24.951852740214413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When only limited target domain data is available, domain adaptation can
improve the performance of a deep neural network (DNN) acoustic model by
leveraging a well-trained source model together with the target domain data.
However, because it suffers from both domain mismatch and data sparsity, domain
adaptation is very challenging. This paper proposes a novel adaptation method
for DNN acoustic models using class similarity. Since the output distribution of
a DNN model contains knowledge of the similarity among classes, which applies to
both the source and target domains, it can be transferred from the source to the
target model to improve performance. In our approach, we first compute the
frame-level posterior probabilities of source samples using the source model.
Then, for each class, the probabilities of that class are used to compute a mean
vector, which we refer to as a mean soft label. During adaptation, these mean
soft labels are used in a regularization term to train the target model.
Experiments showed that our approach outperforms fine-tuning with one-hot labels
on both accent and noise adaptation tasks, especially when the source and target
domains are highly mismatched.
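The two steps of the abstract (compute per-class mean soft labels from source posteriors, then use them as a regularizer during adaptation) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the interpolation weight `lam` and the cross-entropy form of the regularizer are assumptions for the sketch.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def compute_mean_soft_labels(source_posteriors, frame_labels, num_classes):
    """For each class, average the source model's frame-level posterior
    vectors over all frames aligned to that class ("mean soft labels")."""
    mean_soft = np.zeros((num_classes, num_classes))
    for c in range(num_classes):
        mask = frame_labels == c
        if mask.any():
            mean_soft[c] = source_posteriors[mask].mean(axis=0)
    return mean_soft

def adaptation_loss(target_logits, frame_labels, mean_soft_labels,
                    lam=0.5, eps=1e-12):
    """One-hot cross-entropy plus a regularization term that pulls the
    target model's outputs toward each frame's class mean soft label.
    `lam` is a hypothetical interpolation weight, not from the paper."""
    probs = softmax(target_logits)
    n = len(frame_labels)
    # Standard cross-entropy against the one-hot frame labels.
    ce = -np.log(probs[np.arange(n), frame_labels] + eps).mean()
    # Regularizer: cross-entropy against the mean soft label of each
    # frame's class, transferring inter-class similarity knowledge.
    soft_targets = mean_soft_labels[frame_labels]  # (n, num_classes)
    reg = -(soft_targets * np.log(probs + eps)).sum(axis=1).mean()
    return (1 - lam) * ce + lam * reg
```

In a real setup, `source_posteriors` would come from a forward pass of the source acoustic model over source data, and `adaptation_loss` would replace the plain cross-entropy objective when fine-tuning the target model.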
Related papers
- De-Confusing Pseudo-Labels in Source-Free Domain Adaptation [14.954662088592762]
Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data.
We introduce a novel noise-learning approach tailored to address noise distribution in domain adaptation settings.
arXiv Detail & Related papers (2024-01-03T10:07:11Z)
- Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT$2$) method for MSDA.
Our proposed method achieves the state of the art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in a sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Domain Impression: A Source Data Free Domain Adaptation Method [27.19677042654432]
Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels.
This paper proposes a domain adaptation technique that does not need any source data.
Instead of the source data, we are only provided with a classifier that is trained on the source data.
arXiv Detail & Related papers (2021-02-17T19:50:49Z)
- Domain Adaptation without Source Data [20.64875162351594]
We introduce Source data-Free Domain Adaptation (SFDA) to avoid accessing source data that may contain sensitive information.
Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
Our PrDA outperforms conventional domain adaptation methods on benchmark datasets.
arXiv Detail & Related papers (2020-07-03T07:21:30Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.