Domain Generalization in Biosignal Classification
- URL: http://arxiv.org/abs/2011.06207v1
- Date: Thu, 12 Nov 2020 05:15:46 GMT
- Title: Domain Generalization in Biosignal Classification
- Authors: Theekshana Dissanayake, Tharindu Fernando, Simon Denman, Houman
Ghaemmaghami, Sridha Sridharan, Clinton Fookes
- Abstract summary: This study is the first to investigate domain generalization for biosignal data.
Our proposed method achieves accuracy gains of up to 16% for four completely unseen domains.
- Score: 37.70077538403524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: When training machine learning models, we often assume that the
training data and evaluation data are sampled from the same distribution.
However, this assumption is violated when the model is evaluated on another
unseen but similar database, even if that database contains the same classes.
This problem is caused by domain-shift and can be solved using two approaches:
domain adaptation and domain generalization. In brief, domain adaptation methods
can access data from the unseen domains during training, whereas in domain
generalization the unseen data are not available during training. Hence, domain
generalization concerns models that perform well on inaccessible,
domain-shifted data. Method: Our proposed domain generalization method
represents an unseen domain using a set of known basis domains, after which we
classify the unseen domain using classifier fusion. To demonstrate our system,
we employ a collection of heart sound databases that contain normal and
abnormal sounds (classes). Results: Our proposed classifier fusion method
achieves accuracy gains of up to 16% for four completely unseen domains.
Conclusion: Recognizing the complexity induced by the inherent temporal nature
of biosignal data, the two-stage method proposed in this study is able to
effectively simplify the whole process of domain generalization while
demonstrating good results on unseen domains and the adopted basis domains.
Significance: To the best of our knowledge, this is the first study to investigate
domain generalization for biosignal data. Our proposed learning strategy can be
used to effectively learn domain-relevant features while being aware of the
class differences in the data.
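The two-stage recipe in the abstract (represent an unseen domain via known basis domains, then classify with classifier fusion) can be sketched in a minimal, hedged form. Everything below is illustrative: the synthetic heart-sound features, the nearest-centroid per-domain classifiers, and the softmax similarity weighting are stand-ins for the paper's actual components, not its implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis domains: each holds normal (0) and abnormal (1)
# heart-sound feature vectors, with a per-domain distribution shift.
def make_domain(shift):
    X0 = rng.normal(loc=shift, scale=1.0, size=(50, 4))        # class 0
    X1 = rng.normal(loc=shift + 2.0, scale=1.0, size=(50, 4))  # class 1
    return np.vstack([X0, X1]), np.array([0] * 50 + [1] * 50)

basis = [make_domain(s) for s in (0.0, 1.0, -1.0)]

# Stage 1: one simple nearest-class-centroid classifier per basis domain,
# plus a mean feature vector as a crude domain representation.
class_centroids = []   # per domain: (centroid of class 0, centroid of class 1)
domain_centroids = []  # per domain: mean feature vector
for X, y in basis:
    class_centroids.append((X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)))
    domain_centroids.append(X.mean(axis=0))
domain_centroids = np.array(domain_centroids)

def fused_predict(x):
    # Stage 2a: express the unseen sample in terms of the basis domains via
    # softmax similarity to each domain centroid (an assumed weighting scheme).
    d = np.linalg.norm(domain_centroids - x, axis=1)
    w = np.exp(-d) / np.exp(-d).sum()
    # Stage 2b: classifier fusion -- weighted sum of per-domain class scores.
    scores = np.zeros(2)
    for wk, (c0, c1) in zip(w, class_centroids):
        dist = np.array([np.linalg.norm(x - c0), np.linalg.norm(x - c1)])
        scores += wk * np.exp(-dist) / np.exp(-dist).sum()
    return int(scores.argmax())

# A sample drawn from an "unseen" shifted domain.
x_test = rng.normal(loc=0.5 + 2.0, scale=1.0, size=4)
prediction = fused_predict(x_test)
```

The design choice worth noting is that no classifier ever sees the unseen domain: only the fusion weights adapt at test time, which is what makes this a domain generalization (rather than adaptation) scheme.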
Related papers
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Domain Adaptation Principal Component Analysis: base linear method for learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present a method called Domain Adaptation Principal Component Analysis (DAPCA).
DAPCA finds a linear reduced data representation useful for solving the domain adaptation task.
arXiv Detail & Related papers (2022-08-28T21:10:56Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
arXiv Detail & Related papers (2021-12-09T08:57:01Z)
- Towards Data-Free Domain Generalization [12.269045654957765]
How can knowledge contained in models trained on different source data domains be merged into a single model that generalizes well to unseen target domains?
Prior domain generalization methods typically rely on using source domain data, making them unsuitable for private decentralized data.
We propose DEKAN, an approach that extracts and fuses domain-specific knowledge from the available teacher models into a student model robust to domain shift.
arXiv Detail & Related papers (2021-10-09T11:44:05Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
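The memory-efficient temporal ensemble mentioned here can be sketched as follows. This is a hedged illustration, not the paper's code: only an exponential moving average (EMA) of each target sample's class probabilities is stored across epochs, and the class names, 0.9 momentum, and 0.8 confidence threshold are illustrative assumptions.

```python
import numpy as np

class TemporalEnsemble:
    """EMA of per-sample class probabilities for reliable pseudo-labels."""

    def __init__(self, n_samples, n_classes, momentum=0.9, threshold=0.8):
        # Start from a uniform distribution: no sample is confident yet.
        self.ema = np.full((n_samples, n_classes), 1.0 / n_classes)
        self.momentum = momentum
        self.threshold = threshold

    def update(self, indices, probs):
        # Blend this epoch's softmax outputs into the running ensemble;
        # memory cost stays O(n_samples * n_classes) regardless of epochs.
        self.ema[indices] = (self.momentum * self.ema[indices]
                             + (1 - self.momentum) * probs)

    def pseudo_labels(self):
        # Only temporally consistent, confident samples get a pseudo-label.
        conf = self.ema.max(axis=1)
        labels = self.ema.argmax(axis=1)
        labels[conf < self.threshold] = -1  # -1 = ignored by the loss
        return labels
```

Because a sample must be predicted the same way for many epochs before its EMA crosses the threshold, noisy one-off predictions never become pseudo-labels.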
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Adaptive Methods for Real-World Domain Generalization [32.030688845421594]
In our work, we investigate whether it is possible to leverage domain information from unseen test samples themselves.
We propose a domain-adaptive approach consisting of two steps: a) we first learn a discriminative domain embedding from unsupervised training examples, and b) use this domain embedding as supplementary information to build a domain-adaptive model.
Our approach achieves state-of-the-art performance on various domain generalization benchmarks.
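The two-step recipe described in this summary (learn a domain embedding without supervision, then feed it to the classifier as supplementary input) can be sketched with simple stand-ins. The k-means-style clustering below is an assumed substitute for the paper's learned embedding network, and all names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Minimal k-means used here as an unsupervised domain-discovery step.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def domain_augmented_features(X, centers):
    # Step a) domain embedding: soft assignment of each sample to the
    # discovered domains (softmax over negative distances).
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    emb = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    # Step b) supplementary information: concatenate the embedding to the
    # raw features so a downstream classifier can condition on the domain.
    return np.hstack([X, emb])
```

Any off-the-shelf classifier trained on the augmented features then implicitly receives a domain signal at test time, without ever needing domain labels.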
arXiv Detail & Related papers (2021-03-29T17:44:35Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.