Explainable Machine Learning for Hydrocarbon Prospect Risking
- URL: http://arxiv.org/abs/2212.07563v1
- Date: Thu, 15 Dec 2022 00:38:14 GMT
- Title: Explainable Machine Learning for Hydrocarbon Prospect Risking
- Authors: Ahmad Mustafa and Ghassan AlRegib
- Abstract summary: We show how LIME can induce trust in a model's decisions by revealing that its decision-making process is aligned with domain knowledge.
It also has the potential to debug mispredictions caused by anomalous patterns in the data or faulty training datasets.
- Score: 14.221460375400692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hydrocarbon prospect risking is a critical application in geophysics
that predicts well outcomes from a variety of data, including geological,
geophysical, and other information modalities. Traditional routines require
interpreters to go through a lengthy process to arrive at the probability of
success of specific outcomes. AI has the capability to automate this process,
but its adoption has been limited thus far owing to a lack of transparency in
the way complicated, black-box models generate decisions. We demonstrate how
LIME -- a model-agnostic explanation technique -- can be used to inject trust
in model decisions by uncovering the model's reasoning process for individual
predictions. It generates these explanations by fitting interpretable models in
the local neighborhood of the specific data points being queried. On a dataset
of well outcomes and corresponding geophysical attribute data, we show how LIME
can induce trust in the model's decisions by revealing that its decision-making
process is aligned with domain knowledge. Further, it can help debug
mispredictions caused by anomalous patterns in the data or faulty training
datasets.
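To make the mechanism concrete, the sketch below applies LIME to a tabular well-outcome classifier. It is only an illustration under stated assumptions: the geophysical attribute names, the random-forest model, and the synthetic data are placeholders rather than the dataset or model used in the paper, and it assumes the open-source lime and scikit-learn packages.

```python
# Minimal sketch: explaining one well-outcome prediction with LIME.
# The attribute names and data below are illustrative placeholders,
# not the dataset used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["amplitude_anomaly", "avo_gradient", "structural_closure", "seal_quality"]
X_train = rng.normal(size=(500, len(feature_names)))
# Synthetic labels: outcome driven mostly by the first and third attributes.
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Black-box model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME perturbs the queried data point, weights the perturbations by proximity,
# and fits a sparse linear surrogate to the black-box outputs in that neighborhood.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["dry hole", "discovery"],
    mode="classification",
)
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)

# Per-feature weights of the local surrogate: the model's "reasoning" for this prediction.
print(explanation.as_list())
```

If the largest weights fall on attributes an interpreter would consult themselves (for example, amplitude anomalies over the prospect), the explanation supports the trust argument above; weights on implausible attributes flag predictions worth debugging, which is the second use case the abstract describes.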
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF): A Data-Morphology-based Counterfactual Generation Method for Trustworthy Artificial Intelligence [15.415120542032547]
Explainable AI (XAI) seeks to make AI systems more understandable and trustworthy.
This work analyses the value of data morphology strategies in generating counterfactual explanations.
It introduces the Overlap Number of Balls Model-Agnostic CounterFactuals (ONB-MACF) method.
arXiv Detail & Related papers (2024-05-20T18:51:42Z)
- Explainable AI models for predicting liquefaction-induced lateral spreading [1.6221957454728797]
Machine learning can improve lateral spreading prediction models.
The "black box" nature of machine learning models can hinder their adoption in critical decision-making.
This work highlights the value of explainable machine learning for reliable and informed decision-making.
arXiv Detail & Related papers (2024-04-24T16:25:52Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- Topological Interpretability for Deep-Learning [0.30806551485143496]
Deep learning (DL) models cannot quantify the certainty of their predictions.
This work presents a method to infer prominent features in two DL classification models trained on clinical and non-clinical text.
arXiv Detail & Related papers (2023-05-15T13:38:13Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability, agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning behind predictions made by machine-learned models.
A novel contribution of our approach is model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process executions.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation [92.96204497841032]
Causal graphs (CGs) are compact representations of the knowledge of the data generating processes behind the data distributions.
We propose a model-agnostic data augmentation method that allows us to exploit the prior knowledge of the conditional independence (CI) relations.
We experimentally show that the proposed method is effective in improving the prediction accuracy, especially in the small-data regime; a toy sketch of the idea follows this entry.
arXiv Detail & Related papers (2021-02-27T06:13:59Z)
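Following up on the causal-prior entry above, the toy sketch below illustrates the general idea of exploiting a conditional independence (CI) relation, here x ⟂ y | z, for data augmentation. The synthetic variables and the simple within-stratum shuffle are assumptions for illustration, not the exact procedure proposed in that paper.

```python
# Toy sketch: if x and y are independent given z, shuffling x within each
# z stratum yields new rows consistent with that CI relation, enlarging the
# training set. Illustrative only; not the paper's exact augmentation procedure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
z = rng.integers(0, 3, size=200)              # discrete conditioning variable
x = z + rng.normal(scale=0.3, size=200)       # x depends on z only
y = 2 * z + rng.normal(scale=0.3, size=200)   # y depends on z only, so x ⟂ y | z
df = pd.DataFrame({"x": x, "y": y, "z": z})

def augment_via_ci(data: pd.DataFrame, independent_col: str, group_col: str) -> pd.DataFrame:
    """Create new rows by shuffling `independent_col` within each `group_col` stratum."""
    augmented = data.copy()
    augmented[independent_col] = (
        data.groupby(group_col)[independent_col]
        .transform(lambda s: s.sample(frac=1.0, random_state=0).to_numpy())
    )
    return pd.concat([data, augmented], ignore_index=True)

augmented_train_set = augment_via_ci(df, independent_col="x", group_col="z")
print(len(df), "->", len(augmented_train_set))  # 200 -> 400
```

Because the shuffled rows respect the assumed causal structure, such augmentation can add plausible training examples, which is why it can help most in the small-data regime mentioned above.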
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.