Controlled Generation of Unseen Faults for Partial and OpenSet&Partial
Domain Adaptation
- URL: http://arxiv.org/abs/2204.14068v1
- Date: Fri, 29 Apr 2022 13:05:25 GMT
- Authors: Katharina Rombach, Dr. Gabriel Michau and Prof. Dr. Olga Fink
- Abstract summary: New operating conditions can result in a performance drop of fault diagnostics models due to the domain gap between the training and the testing data distributions.
We propose a new framework based on a Wasserstein GAN for Partial and OpenSet&Partial domain adaptation.
The main contribution is controlled fault data generation, which makes it possible to generate unobserved fault types and severity levels in the target domain.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: New operating conditions can result in a performance drop of fault
diagnostics models due to the domain gap between the training and the testing
data distributions. While several domain adaptation approaches have been
proposed to overcome such domain shifts, their application is limited if the
label spaces of the two domains are not congruent. To improve the
transferability of the trained models, particularly in setups where only the
healthy data class is shared between the two domains, we propose a new
framework based on a Wasserstein GAN for Partial and OpenSet&Partial domain
adaptation. The main contribution is controlled fault data generation, which
makes it possible to generate unobserved fault types and severity levels in the
target domain while having access only to healthy samples in the target domain
and faulty samples in the source domain. To evaluate the ability of the
proposed method to bridge domain gaps in different domain adaptation settings,
we conduct Partial as well as OpenSet&Partial domain adaptation experiments on
two bearing fault diagnostics case studies. The results show the versatility of
the framework and that the synthetically generated fault data helps bridge the
domain gaps, especially in instances where the domain gap is large.
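The abstract's core mechanism can be sketched with the standard Wasserstein GAN objectives plus a conditioning vector that specifies the fault type and severity to synthesize. The following is a minimal illustrative sketch, assuming a one-hot fault-type encoding and toy dimensions; the function names and layout are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_condition(fault_type: int, severity: float, n_types: int = 3) -> np.ndarray:
    """Build a conditioning vector: one-hot fault type plus a scalar severity.
    Choosing a (type, severity) pair never observed in the target domain is
    what 'controlled generation of unseen faults' refers to."""
    one_hot = np.zeros(n_types)
    one_hot[fault_type] = 1.0
    return np.concatenate([one_hot, [severity]])

def critic_loss(scores_real: np.ndarray, scores_fake: np.ndarray) -> float:
    """Wasserstein critic objective (to minimize):
    E[f(G(z, c))] - E[f(x_real)]."""
    return float(scores_fake.mean() - scores_real.mean())

def generator_loss(scores_fake: np.ndarray) -> float:
    """Generator objective: -E[f(G(z, c))]."""
    return float(-scores_fake.mean())

# Toy usage: condition the generator input on fault type 2 at severity 0.7.
z = rng.standard_normal(8)                       # latent noise
c = make_condition(fault_type=2, severity=0.7)   # controllable condition
gen_input = np.concatenate([z, c])               # fed to the generator network
```

In practice the critic and generator are neural networks and the critic is constrained (e.g. via a gradient penalty) to stay approximately 1-Lipschitz; this sketch only shows the loss terms and the controllable conditioning.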
Related papers
- Partial Identifiability for Domain Adaptation [17.347755928718872]
We propose a practical domain adaptation framework called iMSDA.
We show that iMSDA outperforms state-of-the-art domain adaptation algorithms on benchmark datasets.
arXiv Detail & Related papers (2023-06-10T19:04:03Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption so that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Domain Adaptation with Incomplete Target Domains [61.68950959231601]
We propose an Incomplete Data Imputation based Adversarial Network (IDIAN) model to address this new domain adaptation challenge.
In the proposed model, we design a data imputation module to fill the missing feature values based on the partial observations in the target domain.
We conduct experiments on both cross-domain benchmark tasks and a real world adaptation task with imperfect target domains.
arXiv Detail & Related papers (2020-12-03T00:07:40Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with the realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed to solve domain generalization by learning domain-invariant representations across the source domains, but these do not guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation leverages well-established source domain information to support learning on the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.