Deep Transfer Learning for Industrial Automation: A Review and
Discussion of New Techniques for Data-Driven Machine Learning
- URL: http://arxiv.org/abs/2012.03301v1
- Date: Sun, 6 Dec 2020 15:58:22 GMT
- Title: Deep Transfer Learning for Industrial Automation: A Review and
Discussion of New Techniques for Data-Driven Machine Learning
- Authors: Benjamin Maschler and Michael Weyrich
- Abstract summary: In this article, the concepts of transfer and continual learning are introduced.
The article reveals promising approaches for industrial deep transfer learning, utilizing methods of both classes of algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this article, the concepts of transfer and continual learning are
introduced. The ensuing review reveals promising approaches for industrial deep
transfer learning, utilizing methods of both classes of algorithms. In the
field of computer vision, deep transfer learning is already state of the art; in
other fields, e.g. fault prediction, its adoption is only beginning. However,
across all fields, the abstract distinction between continual and transfer
learning does not benefit their practical use. Instead, both should be brought
together to create
robust learning algorithms fulfilling the industrial automation sector's
requirements. To better describe these requirements, base use cases of
industrial transfer learning are introduced.
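The base use cases referred to above revolve around reusing knowledge across machines, plants and tasks while continuing to learn on site. Purely as an illustration of bringing the two classes of algorithms together, the following hedged Python/PyTorch sketch transfers a frozen encoder, assumed to have been pretrained on a source task, to a new target task and adds a small rehearsal buffer of source samples as a simple continual-learning safeguard against forgetting. All module names, dimensions and data are assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative assumptions only): transfer a "pretrained"
# encoder to a new industrial task and rehearse stored source samples so
# the source task is not forgotten (continual-learning component).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for an encoder pretrained on a source task (e.g. one machine type).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
source_head = nn.Linear(64, 4)   # source task: 4 fault classes (assumed)
target_head = nn.Linear(64, 6)   # target task: 6 fault classes (assumed)

# Transfer step: reuse the encoder as-is and train only the task heads.
for p in encoder.parameters():
    p.requires_grad = False

# Continual step: a small rehearsal buffer of (synthetic) source samples.
source_x, source_y = torch.randn(64, 32), torch.randint(0, 4, (64,))
target_x, target_y = torch.randn(256, 32), torch.randint(0, 6, (256,))

opt = torch.optim.Adam(
    list(target_head.parameters()) + list(source_head.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    z_t = encoder(target_x)                    # target-task loss on new data
    loss = loss_fn(target_head(z_t), target_y)
    z_s = encoder(source_x)                    # rehearsal loss on buffered data
    loss = loss + 0.3 * loss_fn(source_head(z_s), source_y)
    loss.backward()
    opt.step()

print(f"final combined loss: {loss.item():.3f}")
```

In a real deployment the buffer would hold actual source samples or their embeddings, and the encoder could be gradually unfrozen once the new head has stabilized.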
Related papers
- Bayesian Transfer Learning [13.983016833412307]
"Transfer learning" seeks to improve inference and/or predictive accuracy on a domain of interest by leveraging data from related domains.
This article highlights Bayesian approaches to transfer learning, which have received relatively limited attention despite their innate compatibility with the notion of drawing upon prior knowledge to guide new learning tasks.
We discuss how these methods address the problem of finding the optimal information to transfer between domains, which is a central question in transfer learning.
arXiv Detail & Related papers (2023-12-20T23:38:17Z) - LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review a research line on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z) - Sim2real Transfer Learning for Point Cloud Segmentation: An Industrial
Application Case on Autonomous Disassembly [55.41644538483948]
We present an industrial application case that uses sim2real transfer learning for point cloud data.
We provide insights on how to generate and process synthetic point cloud data.
Additionally, a novel patch-based attention network is proposed to tackle this problem.
arXiv Detail & Related papers (2023-01-12T14:00:37Z) - Towards Machine Learning for Placement and Routing in Chip Design: a
Methodological Overview [72.79089075263985]
Placement and routing are two indispensable and challenging (NP-hard) tasks in modern chip design flows.
Machine learning has shown promising prospects thanks to its data-driven nature, which relies less on domain knowledge and priors.
arXiv Detail & Related papers (2022-02-28T06:28:44Z) - Omni-Training for Data-Efficient Deep Learning [80.28715182095975]
Recent advances reveal that a properly pre-trained model possesses an important property: transferability.
However, a tight combination of pre-training and meta-training cannot achieve both kinds of transferability (across domains and across downstream tasks).
This motivates the proposed Omni-Training framework towards data-efficient deep learning.
arXiv Detail & Related papers (2021-10-14T16:30:36Z) - Empirically Measuring Transfer Distance for System Design and Operation [2.9864637081333085]
We show that transfer learning algorithms have few, if any, examples from which to learn.
We consider the use of transfer distance in the design of machine rebuild procedures to allow for transferable prognostic models.
Practitioners can use the presented methodology to design and operate systems with consideration for the learning theoretic challenges faced by component learning systems.
arXiv Detail & Related papers (2021-07-02T16:45:58Z) - Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z) - Transfer Learning as an Enabler of the Intelligent Digital Twin [0.0]
Digital Twins have been described as beneficial in many areas, such as virtual commissioning, fault prediction or reconfiguration planning.
This article presents several cross-phase industrial transfer learning use cases utilizing intelligent Digital Twins.
arXiv Detail & Related papers (2020-12-03T13:51:05Z) - What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z) - Transfer learning extensions for the probabilistic classification vector
machine [1.6244541005112747]
We propose two transfer learning extensions integrated into the sparse and interpretable probabilistic classification vector machine.
They are compared to standard benchmarks in the field and demonstrate their relevance through either sparsity or performance improvements.
arXiv Detail & Related papers (2020-07-11T08:35:10Z) - On the application of transfer learning in prognostics and health
management [0.0]
Data availability has encouraged researchers and industry practitioners to rely on data-based machine learning, especially deep learning models, for fault diagnostics and prognostics more than ever.
These models provide unique advantages; however, their performance depends heavily on the training data and how well that data represents the test data.
Transfer learning is an approach that can remedy this issue by keeping portions of what is learned from previous training and transferring them to the new application, as sketched below.
arXiv Detail & Related papers (2020-07-03T23:35:18Z)
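Purely as an illustration of that last mechanism, the hedged PyTorch sketch below keeps the feature layers assumed to have been learned on a previous fault-diagnosis task and retrains only a new output head for the new application. The architecture, layer indices and synthetic data are assumptions for demonstration, not the paper's code.

```python
# Hedged sketch: "keep portions of what is learned" by copying a source
# model's feature layers into a new model and retraining only its head.
import torch
import torch.nn as nn

def make_model(n_classes: int) -> nn.Sequential:
    # Feature layers (indices 0-3) followed by a task-specific head (index 4).
    return nn.Sequential(
        nn.Linear(20, 50), nn.ReLU(),
        nn.Linear(50, 50), nn.ReLU(),
        nn.Linear(50, n_classes),
    )

source_model = make_model(n_classes=3)   # assume already trained on source data
target_model = make_model(n_classes=5)   # new application, different label set

# Transfer: copy every parameter except those of the final (head) layer.
transferred = {k: v for k, v in source_model.state_dict().items()
               if not k.startswith("4.")}
target_model.load_state_dict(transferred, strict=False)

# Freeze the transferred feature layers; only the new head is trained.
for name, p in target_model.named_parameters():
    p.requires_grad = not name.startswith("4.")

x, y = torch.randn(128, 20), torch.randint(0, 5, (128,))   # synthetic target data
opt = torch.optim.Adam([p for p in target_model.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(target_model(x), y)
    loss.backward()
    opt.step()
```

How much to freeze, and whether to fine-tune the transferred layers afterwards, depends on how closely the new application's data resembles the original training data.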
This list is automatically generated from the titles and abstracts of the papers on this site.