LAMA-Net: Unsupervised Domain Adaptation via Latent Alignment and
Manifold Learning for RUL Prediction
- URL: http://arxiv.org/abs/2208.08388v1
- Date: Wed, 17 Aug 2022 16:28:20 GMT
- Title: LAMA-Net: Unsupervised Domain Adaptation via Latent Alignment and
Manifold Learning for RUL Prediction
- Authors: Manu Joseph, Varchita Lalwani
- Abstract summary: We propose LAMA-Net, an encoder-decoder Transformer with an induced bottleneck that combines latent alignment via Maximum Mean Discrepancy (MMD) with manifold learning.
The proposed method offers a promising approach to perform domain adaptation in RUL prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prognostics and Health Management (PHM) is an emerging field that has
received much attention from the manufacturing industry because of the benefits
and efficiencies it brings, and Remaining Useful Life (RUL) prediction is at the
heart of any PHM system. Most recent data-driven research demands substantial
volumes of labelled training data before a performant model can be trained under
the supervised learning paradigm. This is where Transfer Learning (TL) and
Domain Adaptation (DA) methods step in, making it possible to generalize a
supervised model to other domains with different data distributions for which
no labelled data are available. In this paper, we propose \textit{LAMA-Net}, an
encoder-decoder (Transformer) model with an induced bottleneck that combines
latent alignment using Maximum Mean Discrepancy (MMD) with manifold learning,
to tackle the problem of Unsupervised Homogeneous Domain Adaptation for RUL
prediction. \textit{LAMA-Net} is validated on NASA's C-MAPSS Turbofan Engine
dataset and compared against other state-of-the-art DA techniques. The results
suggest that the proposed method offers a promising approach to domain
adaptation in RUL prediction. Code will be made available once the paper comes
out of review.
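Since latent alignment via MMD is the core mechanism named in the abstract, the
following is a minimal, hypothetical sketch of how such an alignment loss could
be computed between source and target bottleneck representations; the RBF
kernel, its bandwidth, and all function names are assumptions, not the authors'
implementation.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two batches of latent vectors."""
    sq_dists = torch.cdist(x, y) ** 2  # pairwise squared Euclidean distances
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_loss(z_source, z_target, bandwidth=1.0):
    """Biased empirical estimate of squared MMD between two latent batches:
    MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]."""
    k_ss = rbf_kernel(z_source, z_source, bandwidth).mean()
    k_tt = rbf_kernel(z_target, z_target, bandwidth).mean()
    k_st = rbf_kernel(z_source, z_target, bandwidth).mean()
    return k_ss + k_tt - 2.0 * k_st

# Example: align 64-dimensional bottleneck codes from two mini-batches.
z_s = torch.randn(32, 64)  # latent codes for labelled source windows
z_t = torch.randn(32, 64)  # latent codes for unlabelled target windows
loss = mmd_loss(z_s, z_t)
```

In a LAMA-Net-style training loop this term would presumably be added, with a
tunable weight, to the supervised RUL regression loss and the decoder's
reconstruction loss.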
Related papers
- Unsupervised Domain Adaptation Via Data Pruning [0.0]
We consider the problem from the perspective of unsupervised domain adaptation (UDA).
We propose AdaPrune, a method for UDA whereby training examples are removed to attempt to align the training distribution to that of the target data.
As a method for UDA, we show that AdaPrune outperforms related techniques and is complementary to other UDA algorithms such as CORAL; a toy sketch of the pruning idea follows the detail link below.
arXiv Detail & Related papers (2024-09-18T15:48:59Z)
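As a hedged toy illustration of the pruning idea above: score each source
example by its distance to the centroid of target features and drop the
farthest ones. The scoring rule, the feature space, and the keep ratio are all
assumptions, not AdaPrune's actual selection criterion.

```python
import torch

def prune_source(features_src, features_tgt, keep_ratio=0.8):
    """Keep the source examples whose features lie closest to the target
    centroid -- a stand-in for AdaPrune's real (unspecified here) rule."""
    centroid = features_tgt.mean(dim=0)
    dists = torch.norm(features_src - centroid, dim=1)
    n_keep = int(keep_ratio * len(features_src))
    return torch.argsort(dists)[:n_keep]  # indices, smallest distances first

src = torch.randn(1000, 128)  # source feature vectors
tgt = torch.randn(200, 128)   # unlabelled target feature vectors
idx = prune_source(src, tgt)  # indices of the retained training examples
```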
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Domain adaption and physical constrains transfer learning for shale gas production [0.26440512250125126]
We propose a novel transfer learning methodology that utilizes domain adaptation and physical constraints.
This methodology effectively employs historical data from the source domain to reduce negative transfer from the data distribution perspective.
By incorporating drilling, completion, and geological data as physical constraints, we develop a hybrid model.
arXiv Detail & Related papers (2023-12-18T04:13:27Z)
- Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation [79.22678026708134]
In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods; a minimal prototype sketch follows the detail link below.
arXiv Detail & Related papers (2023-10-12T06:36:41Z)
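As a compact, hypothetical illustration of prototype-based transfer: compute
one mean feature vector per source class and pseudo-label target samples by
their nearest prototype. TCPL's hierarchical module and its explanation
mechanism are not reproduced here; all names and sizes are assumptions.

```python
import torch

def class_prototypes(features, labels, num_classes):
    """One prototype per class: the mean source feature vector of that class
    (assumes every class appears at least once in the batch)."""
    return torch.stack([features[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def pseudo_label(features_tgt, protos):
    """Assign each target sample the class of its nearest prototype."""
    return torch.cdist(features_tgt, protos).argmin(dim=1)

feats_s = torch.randn(500, 64)           # source features
labels_s = torch.randint(0, 10, (500,))  # source labels, 10 classes
feats_t = torch.randn(100, 64)           # unlabelled target features
protos = class_prototypes(feats_s, labels_s, 10)
y_hat_t = pseudo_label(feats_t, protos)
```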
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target-domain classes that are absent from the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Domain Adaptation via Alignment of Operation Profile for Remaining Useful Lifetime Prediction [8.715570103753697]
This paper proposes two novel DA approaches for RUL prediction based on an adversarial domain adaptation framework.
The proposed methodologies align the marginal distributions of each phase of the operation profile in the source domain with its counterpart in the target domain.
Results show that the proposed methods improve the accuracy of RUL predictions compared to current state-of-the-art DA methods; a sketch of the adversarial alignment ingredient follows the detail link below.
arXiv Detail & Related papers (2023-02-03T13:02:27Z)
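Because this entry builds on an adversarial domain adaptation framework, here
is a minimal gradient-reversal discriminator of the kind such frameworks
typically use; the per-phase splitting of the operation profile and all layer
sizes are illustrative assumptions.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the
    backward pass, so the feature extractor is trained adversarially."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts source vs. target from features seen through GradReverse."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats, lambd=1.0):
        return self.net(GradReverse.apply(feats, lambd))

# One adversarial step on features belonging to a single operation phase; the
# (unshown) feature extractor is pushed to make the domains indistinguishable.
disc = DomainDiscriminator(dim=64)
feats = torch.randn(16, 64, requires_grad=True)
domain = torch.cat([torch.ones(8), torch.zeros(8)])  # 1 = source, 0 = target
loss = nn.functional.binary_cross_entropy_with_logits(
    disc(feats).squeeze(1), domain)
loss.backward()
```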
- Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for monocular depth estimation (MDE), referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z)
- DaLC: Domain Adaptation Learning Curve Prediction for Neural Machine Translation [10.03007605098947]
Domain Adaptation (DA) of a Neural Machine Translation (NMT) model often relies on a pre-trained general NMT model which is adapted to the new domain on a sample of in-domain parallel data.
We propose a Domain Learning Curve prediction (DaLC) model that predicts prospective DA performance based on in-domain monolingual samples in the source language.
arXiv Detail & Related papers (2022-04-20T06:57:48Z)
- UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure that uses a mixed classification and Masked Language Model (MLM) loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and that the mixed loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding $91.74\%$ accuracy, a $1.11\%$ absolute improvement over the state-of-the-art; a sketch of such a mixed loss follows the detail link below.
arXiv Detail & Related papers (2021-04-14T19:05:01Z)
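The mixed objective described above can be illustrated with a short,
hypothetical sketch: a weighted sum of a supervised classification loss on
source batches and an MLM loss on unlabelled target batches. The weighting
scheme, the tiny vocabulary, and all names are assumptions.

```python
import torch
from torch import nn

VOCAB = 1000  # deliberately tiny vocabulary to keep the example light

cls_loss_fn = nn.CrossEntropyLoss()
mlm_loss_fn = nn.CrossEntropyLoss(ignore_index=-100)  # -100 = unmasked token

def mixed_loss(cls_logits, labels, mlm_logits, mlm_targets, lam=0.5):
    """Weighted sum of classification and masked-language-model losses."""
    l_cls = cls_loss_fn(cls_logits, labels)
    l_mlm = mlm_loss_fn(mlm_logits.view(-1, VOCAB), mlm_targets.view(-1))
    return lam * l_cls + (1.0 - lam) * l_mlm

cls_logits = torch.randn(8, 2)                      # source sentiment logits
labels = torch.randint(0, 2, (8,))
mlm_logits = torch.randn(8, 128, VOCAB)             # target token logits
mlm_targets = torch.full((8, 128), -100, dtype=torch.long)
mlm_targets[:, 5] = torch.randint(0, VOCAB, (8,))   # one masked position
loss = mixed_loss(cls_logits, labels, mlm_logits, mlm_targets)
```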
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and investigates how such a model can be effectively utilized, without source data, to solve UDA problems; a sketch of one common source-free ingredient follows the detail link below.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
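A common ingredient in source-free adaptation of this kind is information
maximization on unlabelled target predictions: make each prediction confident
while keeping predictions diverse across the batch. The sketch below shows only
that ingredient, as an assumption standing in for the paper's full method.

```python
import torch

def information_maximization_loss(logits):
    """Low per-sample entropy (confidence) plus high marginal entropy
    (diversity); minimizing this encourages confident, balanced predictions."""
    probs = torch.softmax(logits, dim=1)
    per_sample = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    marginal = probs.mean(dim=0)
    marginal_entropy = -(marginal * torch.log(marginal + 1e-8)).sum()
    return per_sample - marginal_entropy

logits = torch.randn(32, 10, requires_grad=True)  # target logits, frozen head
loss = information_maximization_loss(logits)
loss.backward()
```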