Adversarial Domain Adaptation with Paired Examples for Acoustic Scene
Classification on Different Recording Devices
- URL: http://arxiv.org/abs/2110.09598v1
- Date: Mon, 18 Oct 2021 19:34:12 GMT
- Title: Adversarial Domain Adaptation with Paired Examples for Acoustic Scene
Classification on Different Recording Devices
- Authors: Stanisław Kacprzak and Konrad Kowalczyk
- Abstract summary: We investigate several adversarial models for domain adaptation (DA) and their effect on the acoustic scene classification task.
The experiments are performed on the DCASE20 challenge task 1A dataset, in which we can leverage the paired examples of data recorded using different devices.
The results indicate that the best-performing domain adaptation is obtained using the cycle GAN, which achieves as much as a 66% relative improvement in accuracy for the target domain device.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In classification tasks, accuracy diminishes when the data are
gathered in different domains. To address this problem, in this paper we
investigate several adversarial models for domain adaptation (DA) and their
effect on the acoustic scene classification task. The studied models include
several types of generative adversarial networks (GAN), with different loss
functions, and the so-called cycle GAN, which consists of two interconnected GAN
models. The experiments are performed on the DCASE20 challenge task 1A dataset,
in which we can leverage the paired examples of data recorded using different
devices, i.e., the source and target domain recordings. The results of the
performed experiments indicate that the best-performing domain adaptation is
obtained using the cycle GAN, which achieves as much as a 66% relative
improvement in accuracy for the target domain device, with only a 6% relative
decrease in accuracy on the source domain. In addition, by utilizing the paired
data examples, we improve the overall accuracy over a model trained on a larger
unpaired dataset, while decreasing the computational cost of model training.
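As a rough illustration of the setup described in the abstract, the sketch below implements a cycle-GAN-style objective for spectrogram-to-spectrogram device translation in PyTorch. The toy architectures, loss weights, and the paired L1 term are assumptions made for illustration; the paper's exact models and hyperparameters are not specified here.

```python
# Minimal cycle-GAN sketch for device-to-device spectrogram translation.
# All architectures and weights below are illustrative placeholders, not the
# authors' configuration; the paired L1 term reflects the DCASE20 pairing.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Spectrogram-to-spectrogram translator (placeholder architecture)."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class ToyDiscriminator(nn.Module):
    """Patch-style real/fake critic (placeholder architecture)."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))

    def forward(self, x):
        return self.net(x)

G_s2t, G_t2s = ToyGenerator(), ToyGenerator()   # source->target, target->source
D_t, D_s = ToyDiscriminator(), ToyDiscriminator()
l1, mse = nn.L1Loss(), nn.MSELoss()             # LSGAN-style adversarial loss

def generator_loss(x_s, x_t, lam_cyc=10.0, lam_pair=5.0):
    """Adversarial + cycle-consistency + paired-reconstruction objective.

    x_s, x_t are *paired* source/target spectrograms of the same scene
    captured by different devices, as the DCASE20 task 1A data allows.
    """
    fake_t, fake_s = G_s2t(x_s), G_t2s(x_t)
    pred_t, pred_s = D_t(fake_t), D_s(fake_s)
    # Fool both discriminators (least-squares GAN formulation).
    adv = mse(pred_t, torch.ones_like(pred_t)) + mse(pred_s, torch.ones_like(pred_s))
    # Cycle consistency: s -> t -> s and t -> s -> t must reconstruct the input.
    cyc = l1(G_t2s(fake_t), x_s) + l1(G_s2t(fake_s), x_t)
    # Paired supervision: with aligned examples the translation target is known.
    pair = l1(fake_t, x_t) + l1(fake_s, x_s)
    return adv + lam_cyc * cyc + lam_pair * pair

x_s = torch.randn(4, 1, 64, 64)  # dummy source-device log-mel spectrograms
x_t = torch.randn(4, 1, 64, 64)  # dummy paired target-device spectrograms
print(generator_loss(x_s, x_t).item())
```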
Related papers
- NeuroADDA: Active Discriminative Domain Adaptation in Connectomics [3.241925400160274]
We introduce NeuroADDA, a method that combines optimal domain selection with source-free active learning to adapt pretrained backbones to a new dataset.
NeuroADDA consistently outperforms training from scratch across diverse datasets and fine-tuning sample sizes.
arXiv Detail & Related papers (2025-03-08T12:40:30Z)
- SIDDA: SInkhorn Dynamic Domain Adaptation for Image Classification with Equivariant Neural Networks [37.69303106863453]
SIDDA is an out-of-the-box DA training algorithm built upon the Sinkhorn divergence.
We find that SIDDA enhances the generalization capabilities of NNs.
We also study the efficacy of SIDDA on ENNs with respect to the varying group orders of the dihedral group $D_N$.
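For intuition only, here is a generic debiased Sinkhorn-divergence alignment term between source and target feature batches, written as a log-domain Sinkhorn iteration in PyTorch. It illustrates the building block SIDDA is described as using, not SIDDA's actual algorithm; `eps` and the iteration count are arbitrary choices.

```python
# Generic Sinkhorn-divergence alignment penalty between feature batches.
# Illustration of the building block only; not the SIDDA training algorithm.
import math
import torch

def sinkhorn_cost(x, y, eps=1.0, n_iter=200):
    """Entropic OT cost between uniform point clouds, log-domain iterations."""
    C = torch.cdist(x, y) ** 2                       # squared-Euclidean costs
    log_a = torch.full((x.shape[0],), -math.log(x.shape[0]))
    log_b = torch.full((y.shape[0],), -math.log(y.shape[0]))
    f, g = torch.zeros_like(log_a), torch.zeros_like(log_b)
    for _ in range(n_iter):                          # dual (Sinkhorn) updates
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_b[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_a[:, None], dim=0)
    P = torch.exp((f[:, None] + g[None, :] - C) / eps
                  + log_a[:, None] + log_b[None, :])  # transport plan
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps=1.0):
    """Debiased divergence: S(x,y) = OT(x,y) - (OT(x,x) + OT(y,y)) / 2."""
    return sinkhorn_cost(x, y, eps) - 0.5 * (sinkhorn_cost(x, x, eps)
                                             + sinkhorn_cost(y, y, eps))

src = torch.randn(64, 16)        # source-domain features (dummy)
tgt = torch.randn(64, 16) + 0.5  # shifted target-domain features (dummy)
print(sinkhorn_divergence(src, tgt).item())  # add to task loss as alignment term
```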
arXiv Detail & Related papers (2025-01-23T19:29:34Z)
- SelectiveFinetuning: Enhancing Transfer Learning in Sleep Staging through Selective Domain Alignment [3.5833494449195293]
In practical sleep stage classification, a key challenge is the variability of EEG data across different subjects and environments.
Our method utilizes a pretrained Multi Resolution Convolutional Neural Network (MRCNN) to extract EEG features.
By finetuning the model with selective source data, SelectiveFinetuning enhances the model's performance on the target domain.
arXiv Detail & Related papers (2025-01-07T13:08:54Z)
- Progressive Multi-Level Alignments for Semi-Supervised Domain Adaptation SAR Target Recognition Using Simulated Data [3.1951121258423334]
We develop an instance-prototype alignment (AIPA) strategy to push the source domain instances close to the corresponding target prototypes.
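The summary does not spell out the AIPA loss, so the following is only a plausible toy version of instance-to-prototype alignment: per-class target prototypes are computed from (pseudo-)labels, and source features are pulled toward the prototype of their own class.

```python
# Toy instance-to-prototype alignment: pull each source feature toward the
# target-domain prototype of its class. Illustrative only; not the paper's
# exact AIPA formulation.
import torch
import torch.nn.functional as F

def target_prototypes(feat_t, labels_t, num_classes):
    """Mean (then L2-normalized) target feature per (pseudo-)labelled class."""
    protos = torch.zeros(num_classes, feat_t.shape[1])
    for c in range(num_classes):
        mask = labels_t == c
        if mask.any():
            protos[c] = feat_t[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def alignment_loss(feat_s, labels_s, protos):
    """Cosine distance between each source instance and its class prototype."""
    feat_s = F.normalize(feat_s, dim=1)
    return (1.0 - (feat_s * protos[labels_s]).sum(dim=1)).mean()

feat_t, labels_t = torch.randn(32, 8), torch.randint(0, 4, (32,))
feat_s, labels_s = torch.randn(32, 8), torch.randint(0, 4, (32,))
protos = target_prototypes(feat_t, labels_t, num_classes=4)
print(alignment_loss(feat_s, labels_s, protos).item())
```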
arXiv Detail & Related papers (2024-11-07T13:53:13Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Domain Adaptation Principal Component Analysis: base linear method for learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present a method called Domain Adaptation Principal Component Analysis (DAPCA).
DAPCA finds a linear reduced data representation useful for solving the domain adaptation task.
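The exact DAPCA objective is not given in this summary, so the sketch below is explicitly not DAPCA; it only illustrates the general idea of a linear reduction that trades retained variance against the projected source/target mean gap, via a symmetric eigendecomposition.

```python
# NOT the published DAPCA algorithm: a toy linear reduction that keeps variance
# while penalizing the projected source/target mean gap, as one illustration of
# a domain-adaptation-aware linear method.
import numpy as np

def domain_aware_projection(Xs, Xt, k=2, lam=10.0):
    """Top-k eigenvectors of (pooled covariance - lam * gap outer product)."""
    X = np.vstack([Xs, Xt])
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)                    # variance we want to preserve
    gap = (Xs.mean(axis=0) - Xt.mean(axis=0))[:, None]
    M = cov - lam * (gap @ gap.T)               # penalize the domain-mean shift
    vals, vecs = np.linalg.eigh(M)              # symmetric eigendecomposition
    return vecs[:, np.argsort(vals)[::-1][:k]]  # best trade-off directions

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 10))
Xt = rng.normal(size=(200, 10)) + np.r_[2.0, np.zeros(9)]  # shift along dim 0
W = domain_aware_projection(Xs, Xt)
print(np.abs((Xs @ W).mean(axis=0) - (Xt @ W).mean(axis=0)))  # reduced gap
```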
arXiv Detail & Related papers (2022-08-28T21:10:56Z)
- Unsupervised domain adaptation with non-stochastic missing data [0.6608945629704323]
We consider unsupervised domain adaptation (UDA) for classification problems in the presence of missing data in the unlabelled target domain.
Imputation is performed in a domain-invariant latent space and leverages indirect supervision from a complete source domain.
We show the benefits of jointly performing adaptation, classification, and imputation.
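As a loose, assumed illustration of that idea (not the paper's model): a shared encoder maps both domains to a latent space, a decoder imputes the target's missing entries, and classification is supervised only on the complete source domain.

```python
# Toy joint adaptation/classification/imputation: shared latent space, decoder
# imputes unobserved target entries, classifier trained on the complete source.
# A random mask stands in for the paper's structured (non-stochastic) missingness.
import torch
import torch.nn as nn

enc = nn.Linear(10, 4)          # shared encoder -> (ideally) domain-invariant z
dec = nn.Linear(4, 10)          # decoder used for imputation
clf = nn.Linear(4, 3)           # classifier supervised on the source domain
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

x_s, y_s = torch.randn(32, 10), torch.randint(0, 3, (32,))
x_t = torch.randn(32, 10)
mask = torch.rand(32, 10) > 0.3                          # True = observed entry
x_t_in = torch.where(mask, x_t, torch.zeros_like(x_t))   # zero placeholders

z_s, z_t = enc(x_s), enc(x_t_in)
recon = dec(z_t)
# Reconstruction is supervised only on observed target entries; the decoder's
# outputs at unobserved positions then serve as the imputed values.
loss = ce(clf(z_s), y_s) + mse(recon[mask], x_t[mask])
x_t_imputed = torch.where(mask, x_t, recon.detach())
print(loss.item(), x_t_imputed.shape)
```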
arXiv Detail & Related papers (2021-09-16T06:37:07Z)
- UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure using a mixed classification and Masked Language Model loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and that the mixed loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91.74% accuracy, a 1.11% absolute improvement over the state of the art.
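To make the mixed objective concrete, here is a toy version in PyTorch with small stand-in modules instead of a pretrained BERT; UDALM's actual fine-tuning recipe, masking scheme, and mixing schedule are not reproduced.

```python
# Toy mixed loss in the spirit of UDALM: classification on labelled source text
# plus masked-language-model loss on unlabelled target text. Tiny stand-in
# modules replace the pretrained transformer; the masking here is simplistic.
import torch
import torch.nn as nn

VOCAB, HID, CLASSES, MASK_ID = 1000, 64, 2, 0

class TinyEncoder(nn.Module):
    """Stand-in for a pretrained transformer encoder."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(HID, nhead=4, batch_first=True), 1)

    def forward(self, ids):
        return self.enc(self.emb(ids))           # (batch, seq, HID)

encoder, ce = TinyEncoder(), nn.CrossEntropyLoss()
clf_head = nn.Linear(HID, CLASSES)               # source-task (sentiment) head
mlm_head = nn.Linear(HID, VOCAB)                 # token-prediction head

def mixed_loss(src_ids, src_labels, tgt_ids, tgt_tokens, lam=0.5):
    """lam * classification(source) + (1 - lam) * MLM(target)."""
    clf_loss = ce(clf_head(encoder(src_ids).mean(dim=1)), src_labels)
    mlm_logits = mlm_head(encoder(tgt_ids))
    # Real MLM scores only the masked positions; all positions kept for brevity.
    mlm_loss = ce(mlm_logits.reshape(-1, VOCAB), tgt_tokens.reshape(-1))
    return lam * clf_loss + (1 - lam) * mlm_loss

src_ids = torch.randint(1, VOCAB, (8, 16))
src_labels = torch.randint(0, CLASSES, (8,))
tgt_ids = torch.randint(1, VOCAB, (8, 16))
tgt_tokens = tgt_ids.clone()                     # targets: the original tokens
tgt_ids[:, ::4] = MASK_ID                        # mask every fourth position
print(mixed_loss(src_ids, src_labels, tgt_ids, tgt_tokens).item())
```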
arXiv Detail & Related papers (2021-04-14T19:05:01Z)
- Selecting Treatment Effects Models for Domain Adaptation Using Causal Knowledge [82.5462771088607]
We propose a novel model selection metric specifically designed for individualized treatment effect (ITE) methods under the unsupervised domain adaptation setting.
In particular, we propose selecting models whose predictions of interventions' effects satisfy known causal structures in the target domain.
arXiv Detail & Related papers (2021-02-11T21:03:14Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)