Physics-Informed Deep Learning: A Promising Technique for System Reliability Assessment
- URL: http://arxiv.org/abs/2108.10828v1
- Date: Tue, 24 Aug 2021 16:24:46 GMT
- Title: Physics-Informed Deep Learning: A Promising Technique for System Reliability Assessment
- Authors: Taotao Zhou, Enrique Lopez Droguett, Ali Mosleh
- Abstract summary: There is limited study on the utilization of deep learning for system reliability assessment.
We present an approach to frame system reliability assessment in the context of physics-informed deep learning.
The proposed approach is demonstrated by three numerical examples involving a dual-processor computing system.
- Score: 1.847740135967371
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Considerable research has been devoted to deep learning-based predictive
models for system prognostics and health management in the reliability and
safety community. However, there has been limited study on the use of deep
learning for system reliability assessment. This paper aims to bridge this gap
and explore this new interface between deep learning and system reliability
assessment by exploiting the recent advances of physics-informed deep learning.
In particular, we present an approach that frames system reliability assessment
in the context of physics-informed deep learning and discuss the potential value
of physics-informed generative adversarial networks for uncertainty
quantification and the incorporation of measurement data in system reliability
assessment. The proposed approach is demonstrated by three numerical examples
involving a dual-processor computing system. The results indicate the potential
value of physics-informed deep learning to alleviate computational challenges
and combine measurement data and mathematical models for system reliability
assessment.
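As a concrete illustration of this framing, the state probabilities of a repairable Markov system satisfy the Kolmogorov forward equations dP/dt = AP, and a network can be trained against that residual instead of calling a numerical ODE solver. The sketch below is not the authors' code: it uses a hypothetical two-state system with illustrative failure rate `lam` and repair rate `mu` (the paper's examples involve a larger dual-processor Markov model).

```python
# Minimal physics-informed sketch for a hypothetical two-state repairable
# system; all rates and network sizes are illustrative, not from the paper.
import torch

lam, mu = 0.02, 0.5  # assumed failure/repair rates (per hour)
A = torch.tensor([[-lam,  mu],
                  [ lam, -mu]])            # Markov generator matrix
P0 = torch.tensor([1.0, 0.0])              # system starts in the 'up' state

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2), torch.nn.Softmax(dim=-1),  # probabilities sum to 1
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    t = 100.0 * torch.rand(256, 1, requires_grad=True)  # collocation times
    P = net(t)                                          # (256, 2) state probs
    dPdt = torch.stack(
        [torch.autograd.grad(P[:, i].sum(), t, create_graph=True)[0].squeeze(-1)
         for i in range(2)],
        dim=-1)
    physics = (dPdt - P @ A.T).pow(2).mean()    # Kolmogorov forward residual
    initial = (net(torch.zeros(1, 1)) - P0).pow(2).mean()
    loss = physics + initial
    opt.zero_grad(); loss.backward(); opt.step()

# Point availability at t = 10 h: learned probability of the 'up' state.
print(net(torch.tensor([[10.0]]))[0, 0].item())
```

The physics-informed GANs discussed in the abstract would extend a deterministic sketch like this with stochastic generators, so that uncertainty in rates or measurements propagates into the estimated reliability.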
Related papers
- Between Randomness and Arbitrariness: Some Lessons for Reliable Machine Learning at Scale [2.50194939587674]
The dissertation quantifies and mitigates sources of arbitrariness in ML and of randomness in uncertainty estimation and optimization algorithms, in order to achieve scalability without sacrificing reliability.
It serves as an empirical proof by example that research on reliable measurement for machine learning is intimately bound up with research in law and policy.
arXiv Detail & Related papers (2024-06-13T19:29:37Z)
- Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's capabilities with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z)
- Scalable and Efficient Methods for Uncertainty Estimation and Reduction in Deep Learning [0.0]
This paper explores scalable and efficient methods for uncertainty estimation and reduction in deep learning.
We tackle the inherent uncertainties arising from out-of-distribution inputs and hardware non-idealities.
Our approach encompasses problem-aware training algorithms, novel NN topologies, and hardware co-design solutions.
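The summary does not name the specific algorithms, so the following is generic background rather than this paper's method: Monte Carlo dropout, a common baseline for uncertainty estimation in deep learning, where repeated stochastic forward passes yield a predictive mean and spread.

```python
# Generic uncertainty-estimation baseline, not this paper's method.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.2),              # kept stochastic at inference time
    torch.nn.Linear(64, 1),
)

def mc_predict(x, n_samples=100):
    model.train()                         # train mode keeps dropout active
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)    # predictive mean and uncertainty

mean, std = mc_predict(torch.randn(8, 16))
```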
arXiv Detail & Related papers (2024-01-13T19:30:34Z)
- A Holistic Assessment of the Reliability of Machine Learning Systems [30.638615396429536]
This paper proposes a holistic assessment methodology for the reliability of machine learning (ML) systems.
Our framework evaluates five key properties: in-distribution accuracy, distribution-shift robustness, adversarial robustness, calibration, and out-of-distribution detection.
To provide insights into the performance of different algorithmic approaches, we identify and categorize state-of-the-art techniques.
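Calibration, one of the five properties, is easy to make concrete. Below is a standard formulation of the expected calibration error over equal-width confidence bins, not code from the paper.

```python
# Standard ECE computation; illustrative, not taken from the paper.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted max-class probabilities; correct: 0/1 floats."""
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # bin weight times |accuracy - confidence|
    return ece

# e.g. expected_calibration_error(np.array([0.9, 0.6]), np.array([1.0, 0.0]))
```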
arXiv Detail & Related papers (2023-07-20T05:00:13Z)
- Holistic Adversarial Robustness of Deep Learning Models [91.34155889052786]
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models.
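One standard worst-case probe from this literature is the fast gradient sign method; the sketch below is generic, not code from the survey.

```python
# FGSM: perturb inputs within an L-infinity ball to maximally increase loss.
import torch

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient, then clip back to valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```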
arXiv Detail & Related papers (2022-02-15T05:30:27Z)
- Assessment of DeepONet for reliability analysis of stochastic nonlinear dynamical systems [4.301367153728694]
We investigate the efficacy of the recently proposed DeepONet for time-dependent reliability analysis and uncertainty quantification of systems subjected to loading.
Unlike conventional machine learning and deep learning algorithms, DeepONet is an operator network that learns a function-to-function mapping.
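A minimal sketch of that operator-network idea, with illustrative sizes (the branch/trunk decomposition is standard DeepONet; nothing here is taken from the paper's implementation):

```python
import torch

class DeepONet(torch.nn.Module):
    """Branch net encodes an input function sampled at m sensors; trunk net
    encodes a query coordinate; their dot product gives G(u)(t)."""
    def __init__(self, m=100, p=64):
        super().__init__()
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(m, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))

    def forward(self, u_sensors, t):
        # u_sensors: (batch, m) samples of the input (e.g. loading) function
        # t: (batch, 1) query time
        return (self.branch(u_sensors) * self.trunk(t)).sum(-1, keepdim=True)
```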
arXiv Detail & Related papers (2022-01-31T11:41:08Z)
- An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-stepping methods and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- A Safety Framework for Critical Systems Utilising Deep Neural Networks [13.763070043077633]
This paper presents a principled, novel safety argument framework for critical systems that utilise deep neural networks.
The approach allows various forms of prediction, e.g., the future reliability of passing some demands, or the confidence in a required reliability level.
It is supported by a Bayesian analysis using operational data and the recent verification and validation techniques for deep learning.
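A hedged sketch of the flavour of Bayesian analysis described: a conjugate Beta-Binomial model of the probability of failure on demand (pfd), updated with operational data and then used to predict passing future demands. The prior and the operational record below are illustrative assumptions, not the paper's data.

```python
# Beta-Binomial update of pfd from (hypothetical) operational data.
from scipy import stats

a, b = 1.0, 1.0                  # uniform Beta prior over pfd (assumption)
demands, failures = 5000, 0      # assumed failure-free operational record
post_a, post_b = a + failures, b + demands - failures

print("posterior mean pfd:", post_a / (post_a + post_b))

# Probability that the next n demands all pass, integrating the Binomial
# likelihood over the Beta posterior (i.e. 0 failures in n trials):
n = 1000
print("P(pass next 1000 demands):", stats.betabinom(n, post_a, post_b).pmf(0))
```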
arXiv Detail & Related papers (2020-03-07T23:35:05Z)
- Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system to enable human experts to analyze the validity of policy evaluation estimates.
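For orientation only, a generic importance-sampling sketch of off-policy evaluation (not the paper's method), in which extreme trajectory weights flag the influential data a human expert might inspect:

```python
# Generic per-trajectory importance-sampling OPE; not the paper's estimator.
import numpy as np

def is_estimate(trajectories, pi_target, pi_behavior):
    """trajectories: list of [(state, action, reward), ...];
    pi_*(s, a): probability of action a in state s under each policy."""
    values, weights = [], []
    for traj in trajectories:
        w = np.prod([pi_target(s, a) / pi_behavior(s, a) for s, a, _ in traj])
        values.append(w * sum(r for _, _, r in traj))
        weights.append(w)
    # Trajectories with outsized weights dominate the estimate and are the
    # natural candidates for human inspection.
    return float(np.mean(values)), np.array(weights)
```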
arXiv Detail & Related papers (2020-02-10T00:26:43Z)