A Robust Learning Methodology for Uncertainty-aware Scientific Machine
Learning models
- URL: http://arxiv.org/abs/2209.01900v1
- Date: Mon, 5 Sep 2022 10:56:58 GMT
- Title: A Robust Learning Methodology for Uncertainty-aware Scientific Machine
Learning models
- Authors: Erbet Costa Almeida, Carine de Menezes Rebello, Marcio Fontana, Leizer
Schnitman, Idelfonso Bessa dos Reis Nogueira
- Abstract summary: This work proposes a comprehensive methodology for uncertainty evaluation of SciML models.
The uncertainties considered in the proposed method are the absence of theory and causal models, the sensitivity to data corruption or imperfection, and the computational effort.
The methodology is validated through a case study: the development of a Soft Sensor for a polymerization reactor.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust learning is an important issue in Scientific Machine Learning (SciML).
There are several works in the literature addressing this topic. However, there
is an increasing demand for methods that can simultaneously consider all the
different uncertainty components involved in SciML model identification. Hence,
this work proposes a comprehensive methodology for uncertainty evaluation of
SciML models that also considers the several possible sources of uncertainty
involved in the identification process. The uncertainties considered in the
proposed method are the absence of theory and causal models, the sensitivity
to data corruption or imperfection, and the computational effort. It was
therefore possible to provide an overall strategy for uncertainty-aware models
in the SciML field. The methodology is validated through a case study in which
a Soft Sensor for a polymerization reactor is developed. The results
demonstrate that the identified Soft Sensor is robust to uncertainties,
corroborating the consistency of the proposed approach.
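The abstract above does not spell out the implementation, so the following is only a minimal sketch of one common way to obtain uncertainty-aware predictions for a data-driven soft sensor: a bootstrap ensemble of small neural networks whose member spread yields a prediction interval. The data, model sizes, and function names are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: a bootstrap ensemble giving uncertainty-aware soft-sensor
# predictions. This is NOT the paper's methodology, only a generic baseline.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for process data: inputs X (e.g. temperatures, flow
# rates) and a quality variable y the soft sensor should infer.
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=500)

def fit_bootstrap_ensemble(X, y, n_members=10):
    """Train one MLP per bootstrap resample of the identification data."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000)
        members.append(model.fit(X[idx], y[idx]))
    return members

def predict_with_uncertainty(members, X_new):
    """Return the mean prediction and a 95% interval from the ensemble spread."""
    preds = np.stack([m.predict(X_new) for m in members])
    mean = preds.mean(axis=0)
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return mean, lo, hi

ensemble = fit_bootstrap_ensemble(X, y)
mean, lo, hi = predict_with_uncertainty(ensemble, X[:5])
print(mean, lo, hi)
```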
Related papers
- Navigating Uncertainties in Machine Learning for Structural Dynamics: A Comprehensive Review of Probabilistic and Non-Probabilistic Approaches in Forward and Inverse Problems [0.0]
This paper presents a comprehensive review of navigating uncertainties in machine learning (ML).
It categorizes uncertainty-aware approaches into probabilistic and non-probabilistic methods.
The review aims to assist researchers and practitioners in making informed decisions when utilizing ML techniques to address uncertainties in structural dynamic problems.
arXiv Detail & Related papers (2024-08-16T09:43:01Z)
- Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities [79.9629927171974]
Quantifying uncertainty in Large Language Models (LLMs) is crucial for applications where safety and reliability are important.
We propose Kernel Language Entropy (KLE), a novel method for uncertainty estimation in white- and black-box LLMs.
arXiv Detail & Related papers (2024-05-30T12:42:05Z)
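As a rough illustration of the Kernel Language Entropy idea above, the sketch below scores uncertainty as the von Neumann entropy of a unit-trace kernel built from pairwise similarities between sampled answers. The token-overlap similarity used here is a deliberately crude placeholder for the semantic kernels in the paper.

```python
# Hedged sketch of the KLE idea: sample several answers, build a kernel from
# pairwise similarities, normalize to unit trace, take its von Neumann entropy
# as the uncertainty score. Token overlap stands in for a semantic kernel.
import numpy as np

def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def von_neumann_entropy(K: np.ndarray) -> float:
    K = K / np.trace(K)                      # unit-trace, density-matrix-like
    eigvals = np.linalg.eigvalsh(K)
    eigvals = eigvals[eigvals > 1e-12]       # drop numerical zeros
    return float(-(eigvals * np.log(eigvals)).sum())

def kernel_language_entropy(answers: list[str]) -> float:
    n = len(answers)
    K = np.array([[token_overlap(a, b) for b in answers] for a in answers])
    K = 0.5 * (K + K.T) + 1e-6 * np.eye(n)   # symmetrize, keep numerically PSD
    return von_neumann_entropy(K)

# Similar answers -> lower entropy; diverse answers -> higher entropy.
print(kernel_language_entropy(["Paris", "Paris", "Paris is the capital"]))
print(kernel_language_entropy(["Paris", "London", "Berlin"]))
```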
- A Methodology to Identify Physical or Computational Experiment Conditions for Uncertainty Mitigation [0.0]
This paper introduces a methodology for designing computational or physical experiments for system-level uncertainty mitigation purposes.
The proposed methodology is versatile enough to tackle uncertainty management across various design challenges.
arXiv Detail & Related papers (2024-05-22T18:59:42Z)
- Machine Learning Robustness: A Primer [12.426425119438846]
The discussion begins with a detailed definition of robustness, portraying it as the ability of ML models to maintain stable performance across varied and unexpected environmental conditions.
The chapter delves into the factors that impede robustness, such as data bias, model complexity, and the pitfalls of underspecified ML pipelines.
The discussion progresses to explore amelioration strategies for bolstering robustness, starting with data-centric approaches like debiasing and augmentation.
arXiv Detail & Related papers (2024-04-01T03:49:42Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
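A minimal, hypothetical sketch of the input clarification ensembling loop described above: rewrite the input into several clarified versions, query the model on each, and aggregate the answers. The clarify and answer_distribution functions are stand-ins for LLM calls, not an API from the paper.

```python
# Hedged sketch of input clarification ensembling. Disagreement across
# clarifications reflects input ambiguity; uncertainty within each clarified
# query reflects the model itself. All helpers below are hypothetical stubs.
from collections import Counter

def clarify(prompt: str) -> list[str]:
    # In practice: ask an LLM for possible disambiguations of the prompt.
    return [f"{prompt} (interpretation {i})" for i in range(3)]

def answer_distribution(prompt: str) -> Counter:
    # In practice: sample the LLM several times and count its answers.
    canned = {0: ["A", "A", "A"], 1: ["B", "B", "A"], 2: ["A", "A", "B"]}
    i = int(prompt.split("interpretation ")[-1].rstrip(")"))
    return Counter(canned[i])

def ensembled_prediction(prompt: str):
    total = Counter()
    for clarified in clarify(prompt):
        total += answer_distribution(clarified)
    answer, votes = total.most_common(1)[0]
    confidence = votes / sum(total.values())
    return answer, confidence

print(ensembled_prediction("What is the capital?"))  # roughly ('A', 0.67)
```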
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- A Meta-heuristic Approach to Estimate and Explain Classifier Uncertainty [0.4264192013842096]
This work proposes a set of class-independent meta-heuristics that can characterize the complexity of an instance in terms of factors that are mutually relevant to both human and machine learning decision-making.
The proposed measures and framework hold promise for improving model development for more complex instances, as well as providing a new means of model abstention and explanation.
arXiv Detail & Related papers (2023-04-20T13:09:28Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
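A hedged sketch of adversarial training with an FGSM-style perturbation, in the spirit of the jet-tagging strategy described above but on a toy binary classifier rather than an actual flavor-tagging network; the architecture, epsilon, and data are assumptions.

```python
# Hedged sketch: FGSM-style adversarial training on a toy classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)                        # stand-in for jet features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()       # toy labels

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
eps = 0.05                                      # attack strength

for epoch in range(50):
    # 1) craft FGSM perturbations that increase the loss the most
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv).squeeze(-1), y)
    loss.backward()
    with torch.no_grad():
        X_adv = X + eps * X_adv.grad.sign()
    # 2) train on clean and perturbed inputs together
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y) + loss_fn(model(X_adv).squeeze(-1), y)
    loss.backward()
    opt.step()
```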
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
It covers new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Variance based sensitivity analysis for Monte Carlo and importance sampling reliability assessment with Gaussian processes [0.0]
We propose a methodology to quantify the sensitivity of the probability of failure estimator to two uncertainty sources.
This analysis also makes it possible to control the overall error associated with the failure probability estimate and thus provides an accuracy criterion for the estimation.
The approach is proposed for both a Monte Carlo based method as well as an importance sampling based method, seeking to improve the estimation of rare event probabilities.
arXiv Detail & Related papers (2020-11-30T17:06:28Z)
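For context on the reliability-assessment setting above, the sketch below shows the plain Monte Carlo failure-probability estimate that such methods refine, together with its sampling coefficient of variation. The limit-state function and input model are toy assumptions, and the Gaussian-process surrogate error analyzed in the paper is not modeled here.

```python
# Hedged illustration: estimating a failure probability P[g(X) <= 0] by plain
# Monte Carlo. Two error sources coexist in the paper's setting: the finite
# sample size (coefficient of variation below) and the surrogate error of the
# Gaussian process replacing g (not modeled in this toy example).
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    """Toy limit-state function; failure occurs when g(x) <= 0."""
    return 3.0 - x[:, 0] - x[:, 1]

n = 100_000
X = rng.normal(size=(n, 2))                 # probabilistic input model
failures = g(X) <= 0

p_f = failures.mean()                       # Monte Carlo estimate of P_f
cov = np.sqrt((1 - p_f) / (n * p_f))        # sampling coefficient of variation

print(f"P_f = {p_f:.4f}, sampling CoV = {cov:.2%}")
```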