EverAdapt: Continuous Adaptation for Dynamic Machine Fault Diagnosis Environments
- URL: http://arxiv.org/abs/2407.17117v1
- Date: Wed, 24 Jul 2024 09:25:54 GMT
- Title: EverAdapt: Continuous Adaptation for Dynamic Machine Fault Diagnosis Environments
- Authors: Edward, Mohamed Ragab, Yuecong Xu, Min Wu, Zhenghua Chen, Abdulla Alseiari, Xiaoli Li
- Abstract summary: Unsupervised Domain Adaptation (UDA) has emerged as a key solution in data-driven fault diagnosis.
UDA tends to underperform on previously seen domains when adapting to new ones.
We introduce the EverAdapt framework, specifically designed for continuous model adaptation in dynamic environments.
- Score: 23.190374196679766
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Unsupervised Domain Adaptation (UDA) has emerged as a key solution in data-driven fault diagnosis, addressing domain shift where models underperform in changing environments. However, in continually changing environments, UDA tends to underperform on previously seen domains when adapting to new ones - a problem known as catastrophic forgetting. To address this limitation, we introduce the EverAdapt framework, specifically designed for continuous model adaptation in dynamic environments. Central to EverAdapt is a novel Continual Batch Normalization (CBN), which leverages source domain statistics as a reference point to standardize feature representations across domains. EverAdapt not only retains statistical information from previous domains but also adapts effectively to new scenarios. Complementing CBN, we design a class-conditional domain alignment module for effective integration of target domains, and a Sample-efficient Replay strategy to reinforce memory retention. Experiments on real-world datasets demonstrate EverAdapt's superiority in maintaining robust fault diagnosis in dynamic environments. Our code is available at: https://github.com/mohamedr002/EverAdapt
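To make the CBN idea concrete, here is a minimal PyTorch sketch of a batch-normalization layer that anchors each incoming domain's statistics to frozen source-domain statistics. The mixing coefficient `alpha` and the simple linear blend are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
# A minimal sketch of the Continual Batch Normalization (CBN) idea: keep
# frozen source-domain statistics as a fixed reference and blend them with
# statistics estimated on the current batch. The blending scheme is an
# assumption for illustration only.
import torch
import torch.nn as nn


class ContinualBatchNorm1d(nn.Module):
    def __init__(self, num_features: int, alpha: float = 0.5, eps: float = 1e-5):
        super().__init__()
        self.alpha = alpha  # hypothetical source/target mixing weight
        self.eps = eps
        # Frozen reference statistics, captured once on the source domain.
        self.register_buffer("src_mean", torch.zeros(num_features))
        self.register_buffer("src_var", torch.ones(num_features))
        # Learnable affine parameters, as in standard BatchNorm.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    @torch.no_grad()
    def capture_source_stats(self, feats: torch.Tensor) -> None:
        """Record source-domain statistics to serve as the reference point."""
        self.src_mean.copy_(feats.mean(dim=0))
        self.src_var.copy_(feats.var(dim=0, unbiased=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Statistics of the current (possibly shifted) domain batch.
        cur_mean = x.mean(dim=0)
        cur_var = x.var(dim=0, unbiased=False)
        # Anchor the current statistics to the frozen source reference.
        mean = self.alpha * self.src_mean + (1 - self.alpha) * cur_mean
        var = self.alpha * self.src_var + (1 - self.alpha) * cur_var
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.weight * x_hat + self.bias
```

Freezing the source statistics gives every subsequent domain a common reference frame, which is what lets the layer standardize features consistently across domains instead of drifting with each new target.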
Related papers
- Dynamic Domains, Dynamic Solutions: DPCore for Continual Test-Time Adaptation [8.425690424016986]
Continual Test-Time Adaptation (TTA) seeks to adapt a source pre-trained model to continually changing, unlabeled target domains.
Inspired by the principles of online K-Means, this paper introduces a novel approach to continual TTA through visual prompting.
arXiv Detail & Related papers (2024-06-15T20:47:38Z)
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [48.039156140237615]
A Continual Test-Time Adaptation (CTTA) task is proposed to adapt the pre-trained model to continually changing target domains.
We design a Visual Domain Adapter (ViDA) for CTTA, explicitly handling both domain-specific and domain-shared knowledge.
Our proposed method achieves state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-06-07T11:18:53Z)
- PointFix: Learning to Fix Domain Bias for Robust Online Stereo Adaptation [67.41325356479229]
We propose to incorporate an auxiliary point-selective network into a meta-learning framework, called PointFix.
In a nutshell, our auxiliary network learns to fix local variants intensively by effectively back-propagating local information through the meta-gradient.
This network is model-agnostic, so it can be used with any kind of architecture in a plug-and-play manner.
arXiv Detail & Related papers (2022-07-27T07:48:29Z)
- Feed-Forward Latent Domain Adaptation [17.71179872529747]
We study a new, highly practical problem setting that enables resource-constrained edge devices to adapt a pre-trained model to their local data distributions.
Considering limitations of edge devices, we aim to only use a pre-trained model and adapt it in a feed-forward way, without using back-propagation and without access to the source data.
Our solution is to meta-learn a network capable of embedding the mixed-relevance target dataset and dynamically adapting inference for target examples using cross-attention.
arXiv Detail & Related papers (2022-07-15T17:37:42Z)
- Towards Online Domain Adaptive Object Detection [79.89082006155135]
Existing object detection models assume both the training and test data are sampled from the same source domain.
We propose a novel unified adaptation framework that adapts and improves generalization on the target domain in online settings.
arXiv Detail & Related papers (2022-04-11T17:47:22Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we focus on generating target-specific pseudo labels while suppressing high-entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
The proposed approach, CoTTA, is easy to implement and can be readily incorporated into off-the-shelf pre-trained models (see the weight-averaged teacher sketch after this list).
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- The Norm Must Go On: Dynamic Unsupervised Domain Adaptation by Normalization [10.274423413222763]
Domain adaptation is crucial to adapt a learned model to new scenarios, such as domain shifts or changing data distributions.
Current approaches usually require a large amount of labeled or unlabeled data from the shifted domain.
We propose Dynamic Unsupervised Adaptation (DUA) to overcome this problem; a normalization-statistics sketch in the same spirit appears after this list.
arXiv Detail & Related papers (2021-12-01T12:43:41Z)
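For the continual test-time adaptation entries above (CoTTA in particular), the following is a minimal sketch of the weight-averaged teacher idea: an exponential-moving-average (EMA) copy of the student supplies soft pseudo-labels on unlabeled target batches. This is a simplified, assumption-laden sketch, not the authors' implementation; CoTTA's augmentation-averaged predictions and stochastic weight restoration are omitted.

```python
# A minimal sketch of weight-averaged-teacher adaptation in the spirit of
# CoTTA (illustrative only): the teacher is an EMA copy of the student and
# provides soft pseudo-labels for unlabeled target batches.
import copy
import torch
import torch.nn.functional as F


def make_teacher(student):
    # Teacher starts as a frozen copy of the source-pretrained student.
    teacher = copy.deepcopy(student).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher


@torch.no_grad()
def ema_update(teacher, student, momentum: float = 0.999) -> None:
    # Teacher weights drift slowly toward the student's weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def adapt_step(student, teacher, optimizer, batch: torch.Tensor) -> float:
    pseudo = teacher(batch).softmax(dim=1).detach()  # soft pseudo-labels
    loss = F.cross_entropy(student(batch), pseudo)   # soft-target CE (PyTorch >= 1.10)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```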
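And for the normalization-only entry (DUA), a sketch of adapting just the BatchNorm running statistics from a few unlabeled target batches follows, with a decaying momentum so early batches matter most. The decay schedule and constants are assumptions for illustration, not the paper's exact recipe.

```python
# A minimal sketch of normalization-statistics adaptation in the spirit of
# DUA: only BN running statistics are refreshed from unlabeled target data;
# no labels, no gradients, and no source data are required.
import torch
import torch.nn as nn


@torch.no_grad()
def adapt_bn_statistics(model: nn.Module, target_batches,
                        decay: float = 0.94, min_momentum: float = 0.005) -> nn.Module:
    """Refresh only BN running stats from unlabeled target batches."""
    model.train()  # train mode so BN layers update their running statistics
    momentum = 0.1
    for batch in target_batches:
        for m in model.modules():
            if isinstance(m, nn.modules.batchnorm._BatchNorm):
                m.momentum = max(momentum, min_momentum)
        model(batch)        # the forward pass alone refreshes running stats
        momentum *= decay   # later batches shift the statistics less
    model.eval()
    return model
```

This family of methods is the closest relative of EverAdapt's CBN: both treat normalization statistics, rather than all network weights, as the primary lever for tracking domain shift.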