Calibrated Adaptive Teacher for Domain Adaptive Intelligent Fault
Diagnosis
- URL: http://arxiv.org/abs/2312.02826v1
- Date: Tue, 5 Dec 2023 15:19:29 GMT
- Authors: Florent Forest, Olga Fink
- Abstract summary: Unsupervised domain adaptation (UDA) deals with the scenario where labeled data are available in a source domain, and only unlabeled data are available in a target domain.
We propose a novel UDA method called Calibrated Adaptive Teacher (CAT), which calibrates the predictions of the teacher network throughout the self-training process.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent Fault Diagnosis (IFD) based on deep learning has proven to be an
effective and flexible solution, attracting extensive research. Deep neural
networks can learn rich representations from vast amounts of representative
labeled data for various applications. In IFD, they achieve high classification
performance from signals in an end-to-end manner, without requiring extensive
domain knowledge. However, deep learning models usually only perform well on
the data distribution they have been trained on. When applied to a different
distribution, they may experience performance drops. This is also observed in
IFD, where assets are often operated in working conditions different from those
in which labeled data have been collected. Unsupervised domain adaptation (UDA)
deals with the scenario where labeled data are available in a source domain,
and only unlabeled data are available in a target domain, where domains may
correspond to operating conditions. Recent methods rely on training with
confident pseudo-labels for target samples. However, the confidence-based
selection of pseudo-labels is hindered by poorly calibrated confidence
estimates in the target domain, primarily due to over-confident predictions,
which limits the quality of pseudo-labels and leads to error accumulation. In
this paper, we propose a novel UDA method called Calibrated Adaptive Teacher
(CAT), which calibrates the predictions of the teacher network throughout the
self-training process, leveraging post-hoc calibration
techniques. We evaluate CAT on domain-adaptive IFD and perform extensive
experiments on the Paderborn benchmark for bearing fault diagnosis under
varying operating conditions. Our proposed method achieves state-of-the-art
performance on most transfer tasks.
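The core idea, calibrating the teacher's confidence before confidence-based pseudo-label selection, can be sketched as follows. This is a minimal illustration, not the authors' implementation: temperature scaling is assumed as the post-hoc calibration technique (the abstract does not specify which one CAT uses), and the temperature `T` and confidence threshold are arbitrary values chosen for the example; in practice the temperature would be estimated rather than fixed.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scale(logits, T):
    # Post-hoc calibration: dividing logits by a temperature T > 1
    # softens over-confident probability estimates.
    return softmax(logits / T)

def select_pseudo_labels(logits, T=2.0, threshold=0.7):
    # Keep only target samples whose *calibrated* teacher confidence
    # exceeds the threshold; return their pseudo-labels and the mask.
    probs = temperature_scale(logits, T)
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold
    return labels[mask], mask

# Toy teacher logits for 4 unlabeled target samples, 3 classes.
logits = np.array([
    [4.0, 0.5, 0.2],   # confidently class 0
    [1.0, 1.1, 0.9],   # ambiguous
    [0.1, 5.0, 0.3],   # confidently class 1
    [2.0, 1.9, 1.8],   # near-uniform
])
labels, mask = select_pseudo_labels(logits, T=2.0, threshold=0.7)
```

Softening the distribution lowers spuriously high confidences, so fewer noisy target samples pass the selection threshold, which is how calibration is meant to limit the error accumulation described in the abstract.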
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Contrastive Domain Adaptation for Early Misinformation Detection: A Case Study on COVID-19 [8.828396559882954]
Early misinformation often demonstrates both conditional and label shifts against existing misinformation data.
We propose a contrastive adaptation network for early misinformation detection (CANMD).
Results suggest CANMD can effectively adapt misinformation detection systems to the unseen COVID-19 target domain.
arXiv Detail & Related papers (2022-08-20T02:09:35Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Invariance Learning based on Label Hierarchy [17.53032543377636]
Deep Neural Networks inherit spurious correlations embedded in training data and hence fail to predict desired labels on unseen domains.
Invariance Learning (IL) has been developed recently to overcome this shortcoming.
We propose a novel IL framework to overcome the requirement of training data in multiple domains.
arXiv Detail & Related papers (2022-03-29T13:31:21Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Weak Adaptation Learning -- Addressing Cross-domain Data Insufficiency with Weak Annotator [2.8672054847109134]
In some target problem domains, there are not many data samples available, which could hinder the learning process.
We propose a weak adaptation learning (WAL) approach that leverages unlabeled data from a similar source domain.
Our experiments demonstrate the effectiveness of our approach in learning an accurate classifier with limited labeled data in the target domain.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-11-26T18:51:26Z)
- Unsupervised Domain Adaptation for Speech Recognition via Uncertainty Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as the source domain and TED-LIUM 3 as well as SWITCHBOARD as target domains show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.