Explainable AI models for predicting liquefaction-induced lateral spreading
- URL: http://arxiv.org/abs/2404.15959v1
- Date: Wed, 24 Apr 2024 16:25:52 GMT
- Title: Explainable AI models for predicting liquefaction-induced lateral spreading
- Authors: Cheng-Hsi Hsiao, Krishna Kumar, Ellen Rathje
- Abstract summary: Machine learning can improve lateral spreading prediction models.
The "black box" nature of machine learning models can hinder their adoption in critical decision-making.
This work highlights the value of explainable machine learning for reliable and informed decision-making.
- Score: 1.6221957454728797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Earthquake-induced liquefaction can cause substantial lateral spreading, posing threats to infrastructure. Machine learning (ML) can improve lateral spreading prediction models by capturing complex soil characteristics and site conditions. However, the "black box" nature of ML models can hinder their adoption in critical decision-making. This study addresses this limitation by using SHapley Additive exPlanations (SHAP) to interpret an eXtreme Gradient Boosting (XGB) model for lateral spreading prediction, trained on data from the 2011 Christchurch Earthquake. SHAP analysis reveals the factors driving the model's predictions, enhancing transparency and allowing for comparison with established engineering knowledge. The results demonstrate that the XGB model successfully identifies the importance of soil characteristics derived from Cone Penetration Test (CPT) data in predicting lateral spreading, validating its alignment with domain understanding. This work highlights the value of explainable machine learning for reliable and informed decision-making in geotechnical engineering and hazard assessment.
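As a concrete illustration of the workflow the abstract describes, the sketch below trains an XGBoost classifier and attributes its predictions with SHAP's TreeExplainer. The file name, feature columns, and hyperparameters are hypothetical placeholders, not the authors' actual pipeline.

```python
# Minimal sketch of an XGBoost-plus-SHAP workflow, assuming a tabular
# dataset of CPT-derived soil features and site conditions. All file and
# column names below are hypothetical stand-ins.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("christchurch_cpt_sites.csv")  # hypothetical dataset
features = ["qc1ncs", "soil_behavior_index", "gw_depth", "pga", "slope"]
X, y = df[features], df["lateral_spreading"]    # binary target (0/1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles like XGBoost,
# attributing each prediction to individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # global view of feature influence
```

Comparing the resulting SHAP feature rankings against established geotechnical expectations is the transparency check the abstract emphasizes.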
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Developing an Explainable Artificial Intelligent (XAI) Model for Predicting Pile Driving Vibrations in Bangkok's Subsoil [0.0]
This study presents an explainable artificial intelligence (XAI) model for predicting pile driving vibrations in Bangkok's soft clay subsoil.
A deep neural network was developed using a dataset of 1,018 real-world pile driving measurements.
The model achieved a mean absolute error (MAE) of 0.276, outperforming traditional empirical methods.
arXiv Detail & Related papers (2024-09-08T10:13:35Z)
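As a rough sketch of the kind of model this entry describes, here is a small feed-forward regressor trained against an MAE objective in Keras; the input features, layer sizes, and synthetic data are assumptions, not the paper's actual network or dataset.

```python
# Hypothetical sketch of a small feed-forward regressor evaluated by MAE,
# in the spirit of the pile-driving vibration model described above.
# Input dimension, layer sizes, and training data are assumptions.
import numpy as np
from tensorflow import keras

n_features = 6  # hypothetical, e.g. hammer energy, distance, pile depth, ...
X = np.random.rand(1018, n_features).astype("float32")  # stand-in data
y = np.random.rand(1018, 1).astype("float32")           # stand-in vibration target

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),  # single continuous vibration output
])
model.compile(optimizer="adam", loss="mae")  # optimize mean absolute error
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)
print("MAE over the stand-in data:", model.evaluate(X, y, verbose=0))
```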
- Operational range bounding of spectroscopy models with anomaly detection [0.0]
Isolation Forests are shown to effectively identify contexts where prediction models are likely to fail.
The best performance is achieved when Isolation Forests are fit to projections of the prediction model's SHAP explainability values.
arXiv Detail & Related papers (2024-08-05T15:59:36Z)
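A minimal sketch of the idea in the entry above: explain a trained model with SHAP, project the explanations, and fit an Isolation Forest to flag inputs with anomalous attribution patterns. The regressor, the PCA projection, and the contamination level are assumed choices, not the paper's configuration.

```python
# Hedged sketch: use an Isolation Forest over projected SHAP values to flag
# inputs where a trained regressor may be operating outside its reliable
# range. Model, data, and thresholds are assumptions.
import numpy as np
import shap
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest, RandomForestRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 8)), rng.normal(size=500)
X_new = rng.normal(size=(100, 8))

model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

# Explain the training predictions, then model the distribution of
# explanations in a low-dimensional projection.
explainer = shap.TreeExplainer(model)
shap_train = explainer.shap_values(X_train)
proj = PCA(n_components=3).fit(shap_train)

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(proj.transform(shap_train))

# Inputs whose SHAP patterns are easily isolated are likely out-of-domain.
flags = detector.predict(proj.transform(explainer.shap_values(X_new)))
print("flagged:", int((flags == -1).sum()), "of", len(X_new))  # -1 = anomalous
```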
- AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase models' vulnerability to extraction attacks, which is a concern when model owners prefer black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
arXiv Detail & Related papers (2023-02-04T13:23:39Z)
- Estimate Deformation Capacity of Non-Ductile RC Shear Walls using Explainable Boosting Machine [0.0]
This study aims to develop a fully explainable machine learning model to predict the deformation capacity of non-ductile reinforced concrete shear walls.
The proposed Explainable Boosting Machine (EBM) model is an interpretable, robust, naturally explainable glass-box model that nevertheless provides accuracy comparable to its black-box counterparts.
arXiv Detail & Related papers (2023-01-11T09:20:29Z)
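For reference, the interpret package exposes EBMs through a scikit-learn-style API. The sketch below uses synthetic stand-in features rather than the paper's shear-wall database.

```python
# Minimal EBM sketch using the 'interpret' package's scikit-learn-style API.
# Synthetic data stands in for the paper's shear-wall test database.
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 5))  # hypothetical wall geometry / reinforcement features
y = X[:, 0] * 2.0 + np.sin(X[:, 1] * 3) + rng.normal(0, 0.1, 300)

ebm = ExplainableBoostingRegressor()
ebm.fit(X, y)

# Each feature's learned shape function can be inspected directly,
# which is what makes the model a "glass box".
global_expl = ebm.explain_global()
print(ebm.predict(X[:3]))
```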
- Explainable Machine Learning for Hydrocarbon Prospect Risking [14.221460375400692]
We show how LIME can induce trust in a model's decisions by revealing its decision-making process to be aligned with domain knowledge.
It has the potential to debug mispredictions caused by anomalous patterns in the data or faulty training datasets.
arXiv Detail & Related papers (2022-12-15T00:38:14Z)
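A hedged sketch of the LIME-style check described above, using a generic tabular classifier; the data, feature names, and class labels are synthetic placeholders.

```python
# Hedged sketch of a LIME explanation for one prediction of a tabular
# classifier, in the spirit of the prospect-risking workflow above.
# Data, feature names, and class names are synthetic placeholders.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(400, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["amplitude", "frequency", "phase", "coherence"],
    class_names=["low_risk", "high_risk"],
    mode="classification",
)
# Explain a single prediction; the weights show which features drove it,
# and can then be checked against domain knowledge.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```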
- Taming Overconfident Prediction on Unlabeled Data from Hindsight [50.9088560433925]
Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning.
This paper proposes a dual mechanism, named ADaptive Sharpening (ADS), which first applies a soft threshold to adaptively mask out determinate and negligible predictions, and then sharpens the informed predictions that remain.
ADS significantly improves state-of-the-art SSL methods when incorporated as a plug-in.
arXiv Detail & Related papers (2021-12-15T15:17:02Z)
- Spatial machine-learning model diagnostics: a model-agnostic distance-based approach [91.62936410696409]
This contribution proposes spatial prediction error profiles (SPEPs) and spatial variable importance profiles (SVIPs) as novel model-agnostic assessment and interpretation tools.
The SPEPs and SVIPs of geostatistical methods, linear models, random forest, and hybrid algorithms show striking differences and also relevant similarities.
The novel diagnostic tools enrich the toolkit of spatial data science, and may improve ML model interpretation, selection, and design.
arXiv Detail & Related papers (2021-11-13T01:50:36Z)
- Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation [92.96204497841032]
Causal graphs (CGs) are compact representations of the knowledge of the data generating processes behind the data distributions.
We propose a model-agnostic data augmentation method that allows us to exploit prior knowledge of conditional independence (CI) relations.
We experimentally show that the proposed method is effective in improving the prediction accuracy, especially in the small-data regime.
arXiv Detail & Related papers (2021-02-27T06:13:59Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Semiparametric Bayesian Forecasting of Spatial Earthquake Occurrences [77.68028443709338]
We propose a fully Bayesian formulation of the Epidemic Type Aftershock Sequence (ETAS) model.
The occurrence of the mainshock earthquakes in a geographical region is assumed to follow an inhomogeneous spatial point process.
arXiv Detail & Related papers (2020-02-05T10:11:26Z)
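For orientation, the standard ETAS conditional intensity that such Bayesian formulations build on is reproduced below; the notation follows common conventions and may differ from the paper's exact parameterization.

```latex
% Common ETAS conditional intensity (textbook form; the paper's exact
% parameterization may differ):
\[
  \lambda(t, x \mid \mathcal{H}_t)
    = \mu(x) + \sum_{i \,:\, t_i < t} \kappa(m_i)\, g(t - t_i)\, f(x - x_i \mid m_i)
\]
% \mu(x): background (mainshock) intensity over the region
% \kappa(m) = K e^{\alpha (m - M_0)}: expected offspring count for magnitude m
% g(t): Omori-type temporal decay kernel
% f: spatial triggering density centered on the parent event
```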
This list is automatically generated from the titles and abstracts of the papers on this site.