Unsupervised Noise adaptation using Data Simulation
- URL: http://arxiv.org/abs/2302.11981v1
- Date: Thu, 23 Feb 2023 12:57:20 GMT
- Title: Unsupervised Noise adaptation using Data Simulation
- Authors: Chen Chen, Yuchen Hu, Heqing Zou, Linhui Sun, Eng Siong Chng
- Abstract summary: We propose a generative adversarial network based method to efficiently learn a converse clean-to-noisy transformation.
Experimental results show that our method effectively mitigates the domain mismatch between training and test sets.
- Score: 21.866522173387715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural network based speech enhancement approaches aim to learn a
noisy-to-clean transformation using a supervised learning paradigm. However,
such a well-trained transformation is vulnerable to unseen noises that are not
included in the training set. In this work, we focus on the unsupervised noise
adaptation problem in speech enhancement, where the ground truth of target
domain data is completely unavailable. Specifically, we propose a generative
adversarial network based method to efficiently learn a converse clean-to-noisy
transformation using a few minutes of unpaired target domain data. Then this
transformation is utilized to generate sufficient simulated data for domain
adaptation of the enhancement model. Experimental results show that our method
effectively mitigates the domain mismatch between training and test sets, and
surpasses the best baseline by a large margin.
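The abstract describes a two-stage recipe: first, adversarially learn a converse clean-to-noisy generator from a few minutes of unpaired target-domain recordings; second, use that generator to simulate noisy/clean pairs for fine-tuning the enhancement model. The PyTorch sketch below illustrates this idea under simplifying assumptions (a residual MLP generator on magnitude-spectrogram frames and a plain GAN objective); all architectures, names, and hyperparameters are illustrative and not taken from the paper.
```python
# Minimal sketch of the two-stage recipe from the abstract (assumptions noted above).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Stage 1 model: maps clean magnitude-spectrogram frames to simulated noisy ones."""
    def __init__(self, n_freq=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_freq),
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection: the network learns the noise component

class Discriminator(nn.Module):
    """Scores whether a frame looks like real target-domain noisy speech."""
    def __init__(self, n_freq=257):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(clean, target_noisy):
    """One adversarial update. `clean` and `target_noisy` are UNPAIRED
    batches of spectrogram frames with shape (batch, n_freq)."""
    # Discriminator: real target-domain noisy frames vs. simulated ones.
    fake = G(clean).detach()
    d_loss = (bce(D(target_noisy), torch.ones(len(target_noisy), 1))
              + bce(D(fake), torch.zeros(len(fake), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    fake = G(clean)
    g_loss = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

@torch.no_grad()
def simulate_pairs(clean_batch):
    """Stage 2: generate (simulated_noisy, clean) pairs for supervised
    fine-tuning of the enhancement model on the target domain."""
    return G(clean_batch), clean_batch
```
A complete system would typically add a reconstruction or cycle-consistency term so that the simulated noisy speech stays aligned with its clean source; the paper's exact losses and architectures are not reproduced here.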
Related papers
- Effective Noise-aware Data Simulation for Domain-adaptive Speech Enhancement Leveraging Dynamic Stochastic Perturbation [25.410770364140856]
Cross-domain speech enhancement (SE) is often faced with severe challenges due to the scarcity of noise and background information in an unseen target domain.
This study puts forward a novel data simulation method to address this issue, leveraging noise-extractive techniques and generative adversarial networks (GANs).
We introduce the notion of dynamic perturbation, which can inject controlled perturbations into the noise embeddings during inference.
arXiv Detail & Related papers (2024-09-03T02:29:01Z)
- How to Learn in a Noisy World? Self-Correcting the Real-World Data Noise on Machine Translation [10.739338438716965]
We study the impact of real-world hard-to-detect misalignment noise on machine translation.
By observing the increasing reliability of the model's self-knowledge for distinguishing misaligned and clean data at the token level, we propose a self-correction approach.
Our method proves effective for real-world noisy web-mined datasets across eight translation tasks.
arXiv Detail & Related papers (2024-07-02T12:15:15Z)
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies in the diffusion model, such as a channel-shuffling layer and residual-swapping contrastive learning.
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- Pivotal Auto-Encoder via Self-Normalizing ReLU [20.76999663290342]
We formalize single hidden layer sparse auto-encoders as a transform learning problem.
We propose an optimization problem that leads to a predictive model invariant to the noise level at test time.
Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise.
arXiv Detail & Related papers (2024-06-23T09:06:52Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies affine transformations to the feature space, mitigating the malignant effect of noise and improving generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- From Denoising Training to Test-Time Adaptation: Enhancing Domain Generalization for Medical Image Segmentation [8.36463803956324]
We propose the Denoising Y-Net (DeY-Net), a novel approach incorporating an auxiliary denoising decoder into the basic U-Net architecture.
The auxiliary decoder aims to perform denoising training, augmenting the domain-invariant representation that facilitates domain generalization.
Building upon denoising training, we propose Denoising Test Time Adaptation (DeTTA) that further: (i) adapts the model to the target domain in a sample-wise manner, and (ii) adapts to the noise-corrupted input.
arXiv Detail & Related papers (2023-10-31T08:39:15Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies affine transformations to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR).
Specifically, we propose to inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model (a minimal sketch of this idea appears after this list).
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms, including L2-SP, Mixout and SMART.
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
- Improving Distortion Robustness of Self-supervised Speech Processing Tasks with Domain Adaptation [60.26511271597065]
Speech distortions are a long-standing problem that degrades the performance of speech processing models trained in a supervised manner.
Enhancing the robustness of speech processing models is therefore essential for maintaining good performance when speech distortions are encountered.
arXiv Detail & Related papers (2022-03-30T07:25:52Z)
- SemI2I: Semantically Consistent Image-to-Image Translation for Domain Adaptation of Remote Sensing Data [7.577893526158495]
We propose a new data augmentation approach that transfers the style of test data to training data using generative adversarial networks.
Our semantic segmentation framework consists of first training a U-Net on the real training data and then fine-tuning it on the test-stylized fake training data generated by the proposed approach.
arXiv Detail & Related papers (2020-02-14T09:07:09Z)
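As referenced in the LNSR entry above, noise stability regularization injects standard Gaussian noise into hidden representations and penalizes the resulting drift in the model's outputs. A minimal PyTorch sketch of that idea follows; the encoder/head split, noise scale, and penalty weight are assumptions made for illustration, not the paper's exact layerwise formulation.
```python
import torch

def noise_stability_loss(encoder, head, x, sigma=1.0):
    """Penalize output drift when standard Gaussian noise is injected
    into a hidden representation; this is the core idea summarized in
    the LNSR entry above (its exact layerwise scheme is simplified here)."""
    h = encoder(x)                                      # hidden representation
    clean_out = head(h)                                 # prediction from clean features
    noisy_out = head(h + sigma * torch.randn_like(h))   # prediction from perturbed features
    return ((clean_out - noisy_out) ** 2).mean()

# Typical use inside a fine-tuning step, weighted against the task loss:
#   loss = task_loss + reg_weight * noise_stability_loss(encoder, head, batch_x)
```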
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.