Applications of Unsupervised Deep Transfer Learning to Intelligent Fault
Diagnosis: A Survey and Comparative Study
- URL: http://arxiv.org/abs/1912.12528v2
- Date: Sat, 20 Nov 2021 15:04:26 GMT
- Title: Applications of Unsupervised Deep Transfer Learning to Intelligent Fault
Diagnosis: A Survey and Comparative Study
- Authors: Zhibin Zhao, Qiyang Zhang, Xiaolei Yu, Chuang Sun, Shibin Wang,
Ruqiang Yan, Xuefeng Chen
- Abstract summary: We construct a new taxonomy and perform a comprehensive review of UDTL-based IFD according to different tasks.
To emphasize the importance and reproducibility of UDTL-based IFD, the whole test framework will be released to the research community.
- Score: 1.2345552555178128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in intelligent fault diagnosis (IFD) has greatly depended on
deep representation learning and plenty of labeled data. However, machines
often operate under varying working conditions, or the target task has a
different distribution from the collected data used for training (the domain
shift problem). Besides, newly collected test data in the target domain are
usually unlabeled, leading to the unsupervised deep transfer learning based
(UDTL-based) IFD problem. Although UDTL-based IFD has developed rapidly, a
standard and open-source code framework as well as a comparative study are
not yet established. In this paper, we construct a new taxonomy and
perform a comprehensive review of UDTL-based IFD according to different tasks.
Comparative analysis of some typical methods and datasets reveals some open and
essential issues in UDTL-based IFD which are rarely studied, including
transferability of features, influence of backbones, negative transfer,
physical priors, etc. To emphasize the importance and reproducibility of
UDTL-based IFD, the whole test framework will be released to the research
community to facilitate future research. In summary, the released framework and
comparative study can serve as an extended interface and basic results to carry
out new studies on UDTL-based IFD. The code framework is available at
\url{https://github.com/ZhaoZhibin/UDTL}.
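As background for the survey's comparisons, one widely used UDTL discrepancy measure is the maximum mean discrepancy (MMD) between source- and target-domain features. Below is a minimal pure-Python sketch, illustrative only: the released framework's actual implementation may differ, and the feature vectors and `gamma` value are made-up examples.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel between two feature vectors (lists of floats)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def mmd_squared(source, target, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    source-domain and target-domain feature sets."""
    m, n = len(source), len(target)
    k_ss = sum(rbf_kernel(a, b, gamma) for a in source for b in source) / (m * m)
    k_tt = sum(rbf_kernel(a, b, gamma) for a in target for b in target) / (n * n)
    k_st = sum(rbf_kernel(a, b, gamma) for a in source for b in target) / (m * n)
    return k_ss + k_tt - 2 * k_st

# Overlapping domains give a small MMD; shifted domains give a larger one.
src = [[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0]]
tgt_same = [[0.1, 0.0], [0.0, 0.1], [-0.2, 0.1]]
tgt_shift = [[3.0, 3.1], [2.9, 3.0], [3.2, 2.8]]
print(mmd_squared(src, tgt_same) < mmd_squared(src, tgt_shift))  # True
```

Minimizing such a statistic over learned features is the core idea behind many of the discrepancy-based methods the survey compares.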
Related papers
- UDA-Bench: Revisiting Common Assumptions in Unsupervised Domain Adaptation Using a Standardized Framework [59.428668614618914]
We take a deeper look into the diverse factors that influence the efficacy of modern unsupervised domain adaptation (UDA) methods.
To facilitate our analysis, we first develop UDA-Bench, a novel PyTorch framework that standardizes training and evaluation for domain adaptation.
arXiv Detail & Related papers (2024-09-23T17:57:07Z)
- Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning [10.116674195405126]
We argue that a precise characterization of the underlying optimization problem must specify the inference task associated with the application of interest.
Our formalization reveals novel applications of DD across different modeling environments.
We present numerical results for two case studies important in contemporary settings.
arXiv Detail & Related papers (2024-09-02T18:11:15Z)
- Advancing 3D Point Cloud Understanding through Deep Transfer Learning: A Comprehensive Survey [3.929140365559557]
This paper provides a comprehensive overview of the latest techniques for understanding 3DPC using deep transfer learning (DTL) and domain adaptation (DA).
The paper covers various applications, such as 3DPC object detection, semantic labeling, segmentation, classification, registration, downsampling/upsampling, and denoising.
arXiv Detail & Related papers (2024-07-25T08:47:27Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
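Idea (2) amounts to pixel-level blending of a tail-class image onto a context-rich OOD background. A toy sketch follows, assuming a simple convex blend with a hypothetical mixing weight `alpha`; the paper's exact augmentation may differ.

```python
def overlay(tail_img, ood_img, alpha=0.75):
    """Blend a context-limited tail-class image onto a context-rich
    OOD image, pixel by pixel (images as flat lists of floats in [0, 1]).
    `alpha` weights the tail-class foreground."""
    assert len(tail_img) == len(ood_img)
    return [alpha * t + (1 - alpha) * o for t, o in zip(tail_img, ood_img)]

tail = [1.0, 0.0, 0.5]
ood = [0.0, 1.0, 0.5]
print(overlay(tail, ood))  # [0.75, 0.25, 0.5]
```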
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- On-Device Domain Generalization [93.79736882489982]
Domain generalization is critical to on-device machine learning applications.
We find that knowledge distillation is a strong candidate for solving the problem.
We propose a simple idea called out-of-distribution knowledge distillation (OKD), which aims to teach the student how the teacher handles (synthetic) out-of-distribution data.
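The usual distillation objective behind such approaches is the KL divergence between temperature-softened teacher and student output distributions; in OKD this would be evaluated on (synthetic) out-of-distribution inputs. A minimal sketch, not the paper's code, with `temperature` as an assumed hyperparameter:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Matching logits give zero loss; mismatched logits give a positive loss.
print(kd_loss([2.0, 0.5], [2.0, 0.5]))      # 0.0
print(kd_loss([2.0, 0.5], [0.5, 2.0]) > 0)  # True
```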
arXiv Detail & Related papers (2022-09-15T17:59:31Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under independent and identically distributed (IID) and out-of-distribution (OOD) conditions.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the differences between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contributions, and bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z)
- Context-Aware Drift Detection [0.0]
Two-sample tests of homogeneity form the foundation upon which existing approaches to drift detection build.
We develop a more general drift detection framework built upon a foundation of two-sample tests for conditional distributional treatment effects.
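For contrast with the conditional framework proposed there, the classical (unconditional) two-sample test of homogeneity can be sketched as a permutation test on the difference of means. This is a toy illustration with made-up data, not the paper's method:

```python
import random

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sample permutation test of homogeneity using the absolute
    difference of means as the statistic; returns an approximate p-value."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = x + y
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling under the null hypothesis
        px, py = pooled[: len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# A clearly shifted sample yields a small p-value (drift detected).
p_value = permutation_test([0.1, 0.2, 0.0, 0.15, 0.05],
                           [1.1, 1.2, 1.0, 1.15, 1.05])
print(p_value < 0.05)  # True
```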
arXiv Detail & Related papers (2022-03-16T14:23:02Z)
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences.
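The variational characterization of f-divergences referred to here is, in its standard general form (a known result, not the paper's specific bound):

```latex
D_f(P \,\|\, Q) \;=\; \sup_{T} \Big( \mathbb{E}_{x \sim P}\big[T(x)\big]
  \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big] \Big),
```

where $f^{*}$ denotes the convex conjugate of $f$ and the supremum ranges over measurable functions $T$; parameterizing $T$ as a neural network yields a trainable estimate of the discrepancy between source and target distributions.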
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
- Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.