Knowledge Informed Machine Learning using a Weibull-based Loss Function
- URL: http://arxiv.org/abs/2201.01769v1
- Date: Tue, 4 Jan 2022 22:53:14 GMT
- Title: Knowledge Informed Machine Learning using a Weibull-based Loss Function
- Authors: Tim von Hahn and Chris K Mechefske
- Abstract summary: Machine learning can be enhanced through the integration of external knowledge, an approach known as knowledge informed machine learning.
A knowledge informed machine learning technique is demonstrated, using the common IMS and PRONOSTIA bearing data sets.
A thorough statistical analysis of the Weibull-based loss function is conducted, demonstrating the effectiveness of the method on the PRONOSTIA data set.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning can be enhanced through the integration of external
knowledge. This method, called knowledge informed machine learning, is also
applicable within the field of Prognostics and Health Management (PHM). In this
paper, the various methods of knowledge informed machine learning, from a PHM
context, are reviewed with the goal of helping the reader understand the
domain. In addition, a knowledge informed machine learning technique is
demonstrated, using the common IMS and PRONOSTIA bearing data sets, for
remaining useful life (RUL) prediction. Specifically, knowledge is garnered
from the field of reliability engineering which is represented through the
Weibull distribution. The knowledge is then integrated into a neural network
through a novel Weibull-based loss function. A thorough statistical analysis of
the Weibull-based loss function is conducted, demonstrating the effectiveness
of the method on the PRONOSTIA data set. However, the Weibull-based loss
function is less effective on the IMS data set. The results, shortcomings, and
benefits of the approach are discussed at length. Finally, all the code is
publicly available for the benefit of other researchers.
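To make the loss construction concrete, the sketch below shows one plausible way a Weibull-based penalty could be combined with an ordinary regression loss for RUL prediction. This is a minimal illustration under assumed details, not the authors' exact formulation: the helper names (weibull_nll, weibull_based_loss), the shape and scale parameters beta and eta, the weighting term lam, and the use of the implied failure time (operating age plus predicted RUL) are all hypothetical choices made here for illustration.
```python
import torch

def weibull_nll(t, beta, eta, eps=1e-8):
    # Negative log-likelihood of a failure time t under a two-parameter
    # Weibull distribution with shape beta and scale eta (illustrative
    # helper; the paper's exact formulation may differ).
    t = torch.clamp(t, min=eps)
    log_pdf = (torch.log(beta / eta)
               + (beta - 1.0) * torch.log(t / eta)
               - (t / eta) ** beta)
    return -log_pdf

def weibull_based_loss(rul_pred, rul_true, age, beta, eta, lam=0.1):
    # Knowledge informed loss sketch: standard MSE on the RUL target plus a
    # Weibull penalty on the implied failure time (operating age + predicted
    # RUL). lam balances data fit against the reliability prior.
    mse = torch.mean((rul_pred - rul_true) ** 2)
    implied_failure_time = age + rul_pred
    penalty = torch.mean(weibull_nll(implied_failure_time, beta, eta))
    return mse + lam * penalty

# Toy usage with dummy tensors (hours, for illustration only).
rul_pred = torch.tensor([120.0, 80.0, 10.0])    # network output
rul_true = torch.tensor([110.0, 90.0, 15.0])    # ground-truth RUL labels
age = torch.tensor([500.0, 600.0, 700.0])       # time in service so far
beta, eta = torch.tensor(2.0), torch.tensor(700.0)  # assumed fitted values
loss = weibull_based_loss(rul_pred, rul_true, age, beta, eta)
```
In practice the Weibull parameters would be estimated from the run-to-failure histories of the training bearings, and lam tuned so that the reliability prior regularizes, rather than dominates, the data-driven term.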
Related papers
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [79.28821338925947]
Domain-Class Incremental Learning is a realistic but challenging continual learning scenario.
To handle these diverse tasks, pre-trained Vision-Language Models (VLMs) are introduced for their strong generalizability.
This incurs a new problem: the knowledge encoded in the pre-trained VLMs may be disturbed when adapting to new tasks, compromising their inherent zero-shot ability.
Existing methods tackle it by tuning VLMs with knowledge distillation on extra datasets, which demands heavy overhead.
We propose the Distribution-aware Interference-free Knowledge Integration (DIKI) framework, retaining the pre-trained knowledge of VLMs.
arXiv Detail & Related papers (2024-07-07T12:19:37Z)
- Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z)
- Information Leakage Detection through Approximate Bayes-optimal Prediction [22.04308347355652]
Information leakage (IL) involves unintentionally exposing sensitive information to unauthorized parties.
Conventional statistical approaches rely on estimating mutual information between observable and secret information for detecting ILs.
We establish a theoretical framework using statistical learning theory and information theory to quantify and detect IL accurately.
arXiv Detail & Related papers (2024-01-25T16:15:27Z)
- Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML [0.0]
This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering.
The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of data processing steps.
arXiv Detail & Related papers (2023-07-10T11:33:46Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We draw on the multi-modal implicit knowledge in vision-language pre-training models to mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z)
- A Quantitative Perspective on Values of Domain Knowledge for Machine Learning [27.84415856657607]
Domain knowledge in various forms has been playing a crucial role in improving the learning performance.
We study the problem of quantifying the values of domain knowledge in terms of its contribution to the learning performance.
arXiv Detail & Related papers (2020-11-17T06:12:23Z)
- A Human-in-the-Loop Approach based on Explainability to Improve NTL Detection [0.12183405753834559]
This work explains our human-in-the-loop approach to mitigate problems in a real system that uses a supervised model to detect Non-Technical Losses (NTL).
This approach exploits human knowledge (e.g. from the data scientists or the company's stakeholders) and the information provided by explanatory methods to guide the system during the training process.
The results show that the derived prediction model is better in terms of accuracy, interpretability, robustness and flexibility.
arXiv Detail & Related papers (2020-09-28T16:04:07Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions [103.47897241856603]
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- Privileged Information Dropout in Reinforcement Learning [56.82218103971113]
Using privileged information during training can improve the sample efficiency and performance of machine learning systems.
In this work, we investigate Privileged Information Dropout (PID) for achieving the latter; it can be applied equally to value-based and policy-based reinforcement learning algorithms.
arXiv Detail & Related papers (2020-05-19T05:32:33Z)