DANNTe: a case study of a turbo-machinery sensor virtualization under
domain shift
- URL: http://arxiv.org/abs/2201.03850v1
- Date: Tue, 11 Jan 2022 09:24:33 GMT
- Authors: Luca Strazzera and Valentina Gori and Giacomo Veneri
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an adversarial learning method to tackle a Domain Adaptation (DA)
time series regression task (DANNTe). The regression aims at building a virtual
copy of a sensor installed on a gas turbine, to be used in place of the
physical sensor which can be missing in certain situations. Our DA approach is
to search for a domain-invariant representation of the features. The learner
has access to both a labelled source dataset and an unlabeled target dataset
(unsupervised DA) and is trained on both, exploiting the min-max game between
a task regressor and a domain classifier, both neural networks. The two models
share the same feature representation, learnt by a feature extractor. This
work builds on the results published by Ganin et al. (arXiv:1505.07818); we
present an extension suited to time-series applications. We report a
significant improvement in regression performance compared to the baseline
model trained on the source domain only.
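The abstract describes the DANN-style min-max game: a feature extractor feeds both a task regressor (trained on labelled source data) and a domain classifier (trained to tell source from target), with the extractor receiving the *reversed* domain gradient. The following NumPy sketch is a hypothetical illustration of that mechanism on toy 2-D data, not the paper's implementation; all names, the linear models, and the learning rates are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): the first input component carries the regression
# signal, the second is a nuisance feature; the domain shift moves the
# nuisance component on the (unlabelled) target domain.
n = 200
X_src = rng.normal(0.0, 1.0, (n, 2))
y_src = 3.0 * X_src[:, 0] + rng.normal(0.0, 0.1, n)  # labels: source only
X_tgt = rng.normal(0.0, 1.0, (n, 2))
X_tgt[:, 1] += 2.0                                   # covariate shift

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Shared feature extractor z = X @ w, task regressor yhat = a * z,
# domain classifier p = sigmoid(c * z + d).
w = np.array([1.0, 1.0])
a, c, d = 1.0, 0.1, 0.0
lr, lam = 0.05, 1.0  # lam weighs the reversed domain gradient

def src_mse():
    return float(np.mean((a * (X_src @ w) - y_src) ** 2))

mse_init = src_mse()

for _ in range(500):
    z_s, z_t = X_src @ w, X_tgt @ w

    # Task loss: MSE of the regressor on labelled source features.
    e = a * z_s - y_src
    g_a = np.mean(2.0 * e * z_s)
    g_w_task = 2.0 * a * (X_src.T @ e) / n

    # Domain loss: binary cross-entropy, source = 0 / target = 1.
    p_s = sigmoid(c * z_s + d)
    p_t = sigmoid(c * z_t + d)
    g_c = np.mean(p_s * z_s) + np.mean((p_t - 1.0) * z_t)
    g_d = np.mean(p_s) + np.mean(p_t - 1.0)
    g_w_dom = c * (X_src.T @ p_s + X_tgt.T @ (p_t - 1.0)) / n

    # Regressor and domain classifier each descend their own loss.
    a -= lr * g_a
    c -= lr * g_c
    d -= lr * g_d
    # Gradient reversal: the extractor descends the task loss but ASCENDS
    # the domain loss, pushing the shared features to be domain-invariant.
    w -= lr * (g_w_task - lam * g_w_dom)

mse_final = src_mse()
```

After training, the source regression error drops and the extractor's weight on the shifted nuisance component shrinks relative to the signal component, which is the domain-invariance effect the min-max game is designed to produce.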
Related papers
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z) - First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z) - SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z) - Cross-Domain Video Anomaly Detection without Target Domain Adaptation [38.823721272155616]
Video Anomaly Detection (VAD) works assume that at least a few task-relevant target-domain training samples are available for adaptation from the source to the target domain.
This requires laborious model tuning by the end-user, who may prefer a system that works out-of-the-box.
arXiv Detail & Related papers (2022-12-14T03:48:00Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47%, even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - Domain Adaptation for Time-Series Classification to Mitigate Covariate
Shift [3.071136270246468]
This paper proposes a novel supervised domain adaptation method based on two steps.
First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples.
Second, we use embedding similarity techniques to select the corresponding transformation at inference.
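The two steps above can be sketched in NumPy. This is a hypothetical illustration of the idea, not that paper's method: the "class-dependent transformation" is assumed to be a simple mean shift estimated from a few target samples, and "embedding similarity" is assumed to be nearest-centroid distance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D embeddings for two classes, with a different (per-class)
# shift between source and target domains.
src = {0: rng.normal(0.0, 1.0, (50, 2)), 1: rng.normal(5.0, 1.0, (50, 2))}
shift = {0: np.array([2.0, 0.0]), 1: np.array([0.0, -3.0])}
tgt = {c: src[c] + shift[c] for c in src}

# Step 1: estimate a class-dependent source-to-target transformation
# (here, a mean shift) from a few target samples per class.
transform = {c: tgt[c][:5].mean(axis=0) - src[c].mean(axis=0) for c in src}
centroid = {c: src[c].mean(axis=0) for c in src}

def adapt(x):
    # Step 2: embedding similarity -- pick the transformation belonging
    # to the nearest source-class centroid, then map the test sample
    # back toward the source domain.
    c = min(centroid, key=lambda k: np.linalg.norm(x - centroid[k]))
    return x - transform[c]

x_test = tgt[1][10]
x_adapted = adapt(x_test)
```

With only five target samples per class, the estimated mean shift already lands close to the true per-class shift, which is why selecting the transformation at inference by centroid distance suffices in this toy setting.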
arXiv Detail & Related papers (2022-04-07T10:27:14Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object
Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - TridentAdapt: Learning Domain-invariance via Source-Target Confrontation
and Self-induced Cross-domain Augmentation [0.0]
The key challenge is to learn a domain-agnostic representation of the inputs in order to benefit from virtual data.
We propose a novel trident-like architecture that enforces a shared feature encoder to satisfy confrontational source and target constraints simultaneously.
We also introduce a novel training pipeline enabling self-induced cross-domain data augmentation during the forward pass.
arXiv Detail & Related papers (2021-11-30T11:25:46Z) - Learning Domain-invariant Graph for Adaptive Semi-supervised Domain
Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and explores how to effectively utilize such a model, without source data, to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.