On the application of transfer learning in prognostics and health
management
- URL: http://arxiv.org/abs/2007.01965v1
- Date: Fri, 3 Jul 2020 23:35:18 GMT
- Title: On the application of transfer learning in prognostics and health
management
- Authors: Ramin Moradi, Katrina M. Groth
- Abstract summary: Data availability has encouraged researchers and industry practitioners to rely on data-based machine learning, especially deep learning, models for fault diagnostics and prognostics more than ever.
These models provide unique advantages; however, their performance is heavily dependent on the training data and how well that data represents the test data.
Transfer learning is an approach that can remedy this issue by keeping portions of what is learned from previous training and transferring them to the new application.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advancements in sensing and computing technologies, the development of
human-computer interaction frameworks, big data storage capabilities, and the
emergence of cloud storage and cloud computing have resulted in an abundance of
data in modern industry. This data availability has encouraged researchers and
industry practitioners to rely on data-based machine learning models, especially
deep learning, for fault diagnostics and prognostics more than ever. These
models provide unique advantages; however, their performance is heavily
dependent on the training data and how well that data represents the test data.
This issue mandates fine-tuning and even training the models from scratch when
there is a slight change in operating conditions or equipment. Transfer
learning is an approach that can remedy this issue by keeping portions of what
is learned from previous training and transferring them to the new application.
In this paper, a unified definition for transfer learning and its different
types is provided; Prognostics and Health Management (PHM) studies that have
used transfer learning are reviewed in detail; and finally, a discussion of
transfer learning application considerations and gaps is provided to improve
the applicability of transfer learning in PHM.
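The transfer mechanism the abstract describes is most commonly realized as parameter transfer with fine-tuning. The sketch below is illustrative rather than from the paper: a PyTorch-style network pretrained on source-domain data keeps its feature extractor frozen while a new output head is trained for the target equipment. All layer sizes and names are assumptions.

```python
# Minimal parameter-transfer sketch (illustrative; not from the paper).
# A feature extractor trained on a source task is reused for a target
# task: its weights are frozen and only a new output head is trained.
import torch
import torch.nn as nn

# Stand-in for a network pretrained on source-domain data,
# e.g., sensor signals from one machine type.
pretrained = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),   # feature extractor
    nn.Linear(128, 10),               # source-task head (discarded)
)

# Keep everything except the old head; freeze the transferred layers.
feature_extractor = nn.Sequential(*list(pretrained.children())[:-1])
for p in feature_extractor.parameters():
    p.requires_grad = False

# New head for the target task (e.g., 3 fault classes on new equipment).
model = nn.Sequential(feature_extractor, nn.Linear(128, 3))

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64)               # dummy target-domain batch
y = torch.randint(0, 3, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

When the gap between source and target conditions is larger, some of the transferred layers are typically unfrozen and fine-tuned with a small learning rate instead of being kept fixed.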
Related papers
- An Experimental Comparison of Transfer Learning against Self-supervised Learning [6.744847405966574]
This paper compares the performance and robustness of transfer learning and self-supervised learning in the medical field.
We test on data exhibiting several issues common in medical domains, such as data imbalance, data scarcity, and domain mismatch.
We provide recommendations to help users apply transfer learning and self-supervised learning methods in medical areas.
arXiv Detail & Related papers (2024-07-08T04:14:52Z)
- Knowledge-Reuse Transfer Learning Methods in Molecular and Material Science [9.966301355582747]
Machine learning (ML) methods based on big data are expected to break this dilemma.
The application of transfer learning lowers the data requirements for model training.
We focus on the application of transfer learning methods for the discovery of advanced molecules/materials.
arXiv Detail & Related papers (2024-03-02T12:41:25Z)
- Robust Machine Learning by Transforming and Augmenting Imperfect Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z)
- Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening [51.34904967046097]
We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data.
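The summary above does not spell out SSD's dampening rule, but the general shape of importance-based, retrain-free dampening can be sketched as follows. The squared-gradient importance measure, the selection ratio `alpha`, and the dampening factor `lam` are all assumptions for illustration, not the paper's exact method.

```python
# Hypothetical sketch of importance-based selective dampening (the
# paper's exact rule is not given in this summary). Parameters that
# matter far more to the forget set than to the retained data are
# scaled down in place, without any retraining.
import torch

def squared_grad_importance(model, loader, loss_fn):
    """Diagonal importance: mean squared gradient per parameter."""
    imp = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                imp[n] += p.grad.detach() ** 2
    return {n: v / max(len(loader), 1) for n, v in imp.items()}

@torch.no_grad()
def dampen(model, imp_forget, imp_retain, alpha=10.0, lam=0.5):
    # alpha and lam are assumed hyperparameters: the selection ratio
    # and the dampening strength.
    for n, p in model.named_parameters():
        mask = imp_forget[n] > alpha * imp_retain[n]
        p[mask] *= lam
```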
arXiv Detail & Related papers (2023-08-15T11:30:45Z)
- A Data-Based Perspective on Transfer Learning [76.30206800557411]
We take a closer look at the role of the source dataset's composition in transfer learning.
Our framework gives rise to new capabilities such as pinpointing transfer learning brittleness.
arXiv Detail & Related papers (2022-07-12T17:58:28Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
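WEAVER's central operation, blending previously trained weights into the newly fine-tuned model, can be sketched as a simple parameter average. A uniform blend factor `beta` is assumed here for illustration; the paper's exact weighting scheme may differ.

```python
# Uniform weight-averaging sketch (illustrative; WEAVER's exact
# weighting scheme may differ). Both models must share the same
# architecture, e.g., two checkpoints of one transformer.
import torch

@torch.no_grad()
def weight_average(old_model, new_model, beta=0.5):
    """Blend old knowledge into the new model: w = beta*old + (1-beta)*new."""
    old_state = old_model.state_dict()
    merged = {}
    for name, v_new in new_model.state_dict().items():
        if v_new.dtype.is_floating_point:
            merged[name] = beta * old_state[name] + (1.0 - beta) * v_new
        else:
            merged[name] = v_new   # keep integer buffers (e.g., counters)
    new_model.load_state_dict(merged)
    return new_model
```

Applied after each sequential training round, such averaging maintains a single model whose embeddings, per the summary above, resemble those obtained by combined training on all data at once.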
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
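A minimal illustration of a differentiable augmentation layer is sketched below, assuming a learnable noise scale trained by ordinary backpropagation. Note that under the task loss alone the scale would collapse to zero, so in practice it must be trained against an invariance-promoting or held-out objective, which is the kind of criterion such methods rely on; the sketch only shows the differentiable mechanics.

```python
# Simplified differentiable augmentation layer (illustrative; not the
# paper's architecture). The noise scale is a learnable parameter that
# receives gradients through the reparameterized sample.
import torch
import torch.nn as nn

class LearnableNoise(nn.Module):
    def __init__(self, init_log_scale=-2.0):
        super().__init__()
        # Log-parameterized so the scale stays positive.
        self.log_scale = nn.Parameter(torch.tensor(init_log_scale))

    def forward(self, x):
        if not self.training:
            return x                       # no augmentation at test time
        scale = self.log_scale.exp()
        # Reparameterization: gradients flow back to log_scale. Trained
        # on the task loss alone, scale shrinks to zero, so an
        # invariance-promoting term is needed in the objective.
        return x + scale * torch.randn_like(x)

model = nn.Sequential(LearnableNoise(), nn.Linear(64, 10))
```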
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- A Concise Review of Transfer Learning [1.5771347525430772]
Transfer learning aims to boost the performance of a target learner by exploiting knowledge from related source data.
Traditional machine learning and data mining techniques assume that the training and testing data are drawn from the same feature space and distribution.
arXiv Detail & Related papers (2021-04-05T20:34:55Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
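One standard way to probe the "same basin" claim is to interpolate linearly between two sets of trained weights and verify that the loss stays low along the whole path. The sketch below assumes that protocol; the paper's experimental details may differ.

```python
# Linear-interpolation probe of the loss landscape (a standard check;
# the paper's exact protocol may differ). If two fine-tuned models sit
# in the same basin, loss stays low for every mixture of their weights.
import copy
import torch

@torch.no_grad()
def interpolation_losses(model_a, model_b, eval_fn, steps=11):
    """eval_fn(model) -> scalar loss on a fixed evaluation batch."""
    sa, sb = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    losses = []
    for i in range(steps):
        t = i / (steps - 1)
        mixed = {k: ((1 - t) * sa[k] + t * sb[k])
                 if sa[k].dtype.is_floating_point else sa[k]
                 for k in sa}
        probe.load_state_dict(mixed)
        losses.append(eval_fn(probe))
    return losses   # no high-loss barrier along the path => same basin
```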
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
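The zeroth-order piece of BAR can be illustrated in isolation: with only input-output access to the model, the gradient of the loss with respect to a universal input perturbation is estimated from finite differences along random directions. The perturbation shape, loss, and hyperparameters below are toy assumptions, not the paper's setup.

```python
# Sketch of zeroth-order optimization for reprogramming (illustrative;
# names and hyperparameters are assumptions). The target model is a
# black box, so gradients are estimated from its responses alone.
import numpy as np

def zo_gradient(loss_fn, delta, mu=0.01, q=20, rng=None):
    """One-sided zeroth-order gradient estimate of loss_fn at delta."""
    rng = rng if rng is not None else np.random.default_rng(0)
    base = loss_fn(delta)
    grad = np.zeros_like(delta)
    for _ in range(q):
        u = rng.standard_normal(delta.shape)
        u /= np.linalg.norm(u)              # random unit direction
        grad += (loss_fn(delta + mu * u) - base) / mu * u
    return grad * (delta.size / q)

# Toy stand-in for "query the black-box model and score its mapped
# labels"; BAR's real loss embeds delta into source-domain inputs and
# applies multi-label mapping between source and target classes.
def toy_loss(d):
    return float((d - 1.0) @ (d - 1.0))

rng = np.random.default_rng(0)
delta = np.zeros(64)                        # universal perturbation
for _ in range(200):
    delta -= 0.05 * zo_gradient(toy_loss, delta, rng=rng)
```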
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.