Estimate Deformation Capacity of Non-Ductile RC Shear Walls using
Explainable Boosting Machine
- URL: http://arxiv.org/abs/2301.04652v1
- Date: Wed, 11 Jan 2023 09:20:29 GMT
- Title: Estimate Deformation Capacity of Non-Ductile RC Shear Walls using
Explainable Boosting Machine
- Authors: Zeynep Tuna Deger, Gulsen Taskin Kaya, John W Wallace
- Abstract summary: This study aims to develop a fully explainable machine learning model to predict the deformation capacity of non-ductile reinforced concrete shear walls.
The proposed Explainable Boosting Machines (EBM)-based model is an interpretable, robust, naturally explainable glass-box model, yet provides high accuracy comparable to its black-box counterparts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning is becoming increasingly prevalent for tackling challenges
in earthquake engineering and providing fairly reliable and accurate
predictions. However, it is mostly unclear how decisions are made because
machine learning models are generally highly sophisticated, resulting in opaque
black-box models. Machine learning models that are naturally interpretable and
provide their own decision explanation, rather than relying on a separate
explanatory method, give a more faithful account of what the model actually
computes. With this
motivation, this study aims to develop a fully explainable machine learning
model to predict the deformation capacity of non-ductile reinforced concrete
shear walls based on experimental data collected worldwide. The proposed
Explainable Boosting Machines (EBM)-based model is an interpretable, robust,
naturally explainable glass-box model, yet provides high accuracy comparable to
its black-box counterparts. The model enables the user to observe the
relationship between the wall properties and the deformation capacity by
quantifying the individual contribution of each wall property as well as the
correlations among them. The mean coefficient of determination (R^2) and the
mean ratio of predicted to actual values on the test dataset are 0.92 and 1.05,
respectively. The proposed predictive model stands out for its overall
consistency with scientific knowledge, its practicality, and its
interpretability, achieved without sacrificing accuracy.
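As a concrete illustration of the glass-box approach described above, the sketch below fits an Explainable Boosting Machine regressor with the open-source `interpret` package; an EBM is a generalized additive model of the form y_hat = beta_0 + sum_i f_i(x_i) + sum_(i,j) f_ij(x_i, x_j), where each shape function is learned by boosted shallow trees on a single feature. This is a minimal sketch under stated assumptions: the wall-property names and the synthetic data are hypothetical placeholders, not the authors' experimental database or exact feature set.

```python
# Minimal sketch (not the authors' code): an EBM regressor for deformation
# capacity, using the open-source `interpret` package. Feature names and data
# below are hypothetical placeholders, not the worldwide wall-test database.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "axial_load_ratio":  rng.uniform(0.0, 0.4, n),    # assumed wall properties
    "shear_span_ratio":  rng.uniform(0.5, 3.0, n),
    "web_reinf_ratio":   rng.uniform(0.001, 0.01, n),
    "concrete_strength": rng.uniform(20.0, 60.0, n),
})
# Placeholder target (e.g., drift capacity in %); real labels come from tests.
y = (1.5 - 2.0 * X["axial_load_ratio"]
     + 0.3 * X["shear_span_ratio"]
     + rng.normal(0.0, 0.1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Glass-box additive model: intercept + per-feature shape functions
# + a limited number of pairwise interaction terms.
ebm = ExplainableBoostingRegressor(interactions=5, random_state=0)
ebm.fit(X_tr, y_tr)

pred = ebm.predict(X_te)
print("test R^2:", r2_score(y_te, pred))                      # paper reports ~0.92
print("mean predicted/actual:", float(np.mean(pred / y_te)))  # paper reports ~1.05

# Global explanation: per-term contribution curves and importances, i.e.,
# how each wall property (and each interaction) drives the prediction.
global_explanation = ebm.explain_global()
```

The additive structure is what makes the model interpretable: each learned shape function can be plotted against its wall property and checked against engineering expectations.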
Related papers
- Explainable AI models for predicting liquefaction-induced lateral spreading [1.6221957454728797]
Machine learning can improve lateral spreading prediction models.
The "black box" nature of machine learning models can hinder their adoption in critical decision-making.
This work highlights the value of explainable machine learning for reliable and informed decision-making.
arXiv Detail & Related papers (2024-04-24T16:25:52Z) - SLEM: Machine Learning for Path Modeling and Causal Inference with Super
Learner Equation Modeling [3.988614978933934]
Causal inference is a crucial goal of science, enabling researchers to arrive at meaningful conclusions using observational data.
Path models, Structural Equation Models (SEMs) and Directed Acyclic Graphs (DAGs) provide a means to unambiguously specify assumptions regarding the causal structure underlying a phenomenon.
We propose Super Learner Equation Modeling, a path modeling technique integrating machine learning Super Learner ensembles.
arXiv Detail & Related papers (2023-08-08T16:04:42Z) - Preserving Knowledge Invariance: Rethinking Robustness Evaluation of
Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust only if its performance remains consistently accurate across the whole of each clique.
arXiv Detail & Related papers (2023-05-23T12:05:09Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amount of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Glass-box model representation of seismic failure mode prediction for
conventional RC shear walls [0.0]
This study proposes a glass-box (interpretable) classification model to predict the seismic failure mode of conventional reinforced concrete shear walls.
The trade-off between model complexity and model interpretability was discussed using eight Machine Learning (ML) methods.
The proposed model aims to provide engineers with interpretable, robust, and rapid predictions for seismic performance assessment.
arXiv Detail & Related papers (2021-11-12T10:21:54Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Physics-Integrated Variational Autoencoders for Robust and Interpretable
Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z) - Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or decreasing specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z) - Dynamic Knowledge Distillation for Black-box Hypothesis Transfer
Learning [20.533564478224967]
We introduce a novel algorithm called dynamic knowledge distillation for hypothesis transfer learning (dkdHTL).
In this method, we use knowledge distillation with instance-wise weighting mechanism to adaptively transfer the "dark" knowledge from the source hypothesis to the target domain.
Empirical results on both transfer learning benchmark datasets and a healthcare dataset demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-07-24T05:19:08Z) - Explainable Deep Modeling of Tabular Data using TableGraphNet [1.376408511310322]
We propose a new architecture that produces explainable predictions in the form of additive feature attributions.
We show that our explainable model attains the same level of performance as black box models.
arXiv Detail & Related papers (2020-02-12T20:02:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.