Explainable AI for Correct Root Cause Analysis of Product Quality in Injection Moulding
- URL: http://arxiv.org/abs/2505.01445v1
- Date: Tue, 29 Apr 2025 16:58:01 GMT
- Title: Explainable AI for Correct Root Cause Analysis of Product Quality in Injection Moulding
- Authors: Muhammad Muaz, Sameed Sajid, Tobias Schulze, Chang Liu, Nils Klasen, Benny Drescher,
- Abstract summary: This study first shows that the interactions among the multiple input machine settings do exist in real experimental data collected as per a central composite design. Then, the model-agnostic explainable AI methods are compared for the first time to show that different explainability methods indeed lead to different feature impact analysis in injection moulding.
- Score: 2.3992545463376618
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: If a product deviates from its desired properties in the injection moulding process, root cause analysis can be aided by models that relate the input machine settings to the output quality characteristics. The machine learning models tested for quality prediction are mostly black boxes; therefore, no direct explanation of their prognosis is given, which restricts their applicability in quality control. Previously attempted explainability methods are either restricted to tree-based algorithms or do not emphasize the fact that some explainability methods can lead to wrong root cause identification of a product's deviation from its desired properties. This study first shows that interactions among the multiple input machine settings do exist in real experimental data collected according to a central composite design. Then, model-agnostic explainable AI methods are compared for the first time to show that different explainability methods indeed lead to different feature impact analyses in injection moulding. Moreover, it is shown that better feature attribution translates to correct cause identification and actionable insights for the injection moulding process. Being model-agnostic, the explanations are performed on both a random forest and a multilayer perceptron for the cause analysis, as both models have a mean absolute percentage error of less than 0.05% on the experimental dataset.
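To make the abstract's comparison concrete, the sketch below sets up a surrogate of the kind of analysis described: a random forest fitted on machine settings whose response contains an explicit interaction term, then explained with two model-agnostic methods (permutation importance and permutation-based SHAP) whose attributions can be compared. This is a minimal illustration under stated assumptions, not the authors' code: the setting names, the synthetic response, the 400-sample design, and the choice of these two particular explainability methods are all placeholders standing in for the paper's central-composite-design experiments.

```python
# Minimal sketch, NOT the paper's code or data: compares two model-agnostic
# explainability methods on a surrogate injection-moulding quality model.
# Feature names, the synthetic response (with a deliberate interaction term),
# and the method choices are illustrative assumptions.
import numpy as np
import shap  # optional dependency; provides a permutation-based SHAP explainer
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["melt_temperature", "injection_speed",
                 "packing_pressure", "cooling_time"]  # hypothetical settings

# Settings scaled to [-1, 1], loosely mimicking a central composite design region.
X = rng.uniform(-1.0, 1.0, size=(400, 4))
# Quality response with an interaction between settings 0 and 2,
# mimicking the interactions the study reports in real experimental data.
y = (2.0 * X[:, 0] + 1.5 * X[:, 2] + 3.0 * X[:, 0] * X[:, 2]
     + 0.5 * X[:, 1] - 0.8 * X[:, 3] + rng.normal(0.0, 0.05, size=400))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Sanity check on the surrogate fit (the paper reports MAPE < 0.05% on its data;
# this synthetic example makes no such claim).
pred = model.predict(X_te)
mape = np.mean(np.abs((y_te - pred) / np.maximum(np.abs(y_te), 1e-8)))
print(f"surrogate MAPE: {mape:.4f}")

# Method 1: permutation importance (global, model-agnostic).
pi = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)

# Method 2: SHAP attributions via a model-agnostic permutation explainer.
explainer = shap.Explainer(model.predict, X_tr)
shap_values = explainer(X_te)
mean_abs_shap = np.abs(shap_values.values).mean(axis=0)

for name, p, s in zip(feature_names, pi.importances_mean, mean_abs_shap):
    print(f"{name:18s}  permutation={p:7.3f}  mean|SHAP|={s:7.3f}")
```

On real data, the paper's point is that such attributions can disagree, and only the attribution that reflects the true interaction structure supports correct root-cause identification and actionable process adjustments.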
Related papers
- Internal Causal Mechanisms Robustly Predict Language Model Out-of-Distribution Behaviors [61.92704516732144]
We show that the most robust features for correctness prediction are those that play a distinctive causal role in the model's behavior.
We propose two methods that leverage causal mechanisms to predict the correctness of model outputs.
arXiv Detail & Related papers (2025-05-17T00:31:39Z)
- F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI [15.314388210699443]
XAI techniques can extract meaningful insights from deep learning models.
How to properly evaluate them remains an open problem.
We propose Fine-tuned Fidelity (F-Fidelity) as a robust evaluation framework for XAI.
arXiv Detail & Related papers (2024-10-03T20:23:06Z)
- Explainability of Machine Learning Models under Missing Data [3.0485328005356136]
Missing data is a prevalent issue that can significantly impair model performance and explainability.
This paper briefly summarizes the development of the field of missing data and investigates the effects of various imputation methods on SHAP.
arXiv Detail & Related papers (2024-06-29T11:31:09Z)
- Data-centric Prediction Explanation via Kernelized Stein Discrepancy [14.177012256360635]
This paper presents a Highly-precise and Data-centric Explanation (HD-Explain) prediction explanation method that exploits properties of Kernelized Stein Discrepancy (KSD).
Specifically, the KSD uniquely defines a parameterized kernel function for a trained model that encodes model-dependent data correlation.
We show that HD-Explain outperforms existing methods from various aspects, including preciseness (fine-grained explanation), consistency, and computation efficiency.
arXiv Detail & Related papers (2024-03-22T19:04:02Z)
- PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection [6.264663726458324]
An explanation of an AI model's prediction used to support decision making in cyber security is of critical importance.
Most existing AI models lack the ability to provide explanations of their prediction results, despite their strong performance in most scenarios.
We propose a novel explainable AI method, called PhilaeX, that provides the means to identify the optimized subset of features to form the complete explanations of AI models' predictions.
arXiv Detail & Related papers (2022-07-02T05:06:24Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
arXiv Detail & Related papers (2021-10-13T22:30:47Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
- A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)