Explainable artificial intelligence for Healthcare applications using
Random Forest Classifier with LIME and SHAP
- URL: http://arxiv.org/abs/2311.05665v1
- Date: Thu, 9 Nov 2023 11:43:10 GMT
- Title: Explainable artificial intelligence for Healthcare applications using
Random Forest Classifier with LIME and SHAP
- Authors: Mrutyunjaya Panda, Soumya Ranjan Mahanta
- Abstract summary: There is a pressing need to understand the computational details hidden in black box AI techniques.
Explainable AI (xAI) originated from these challenges.
This book chapter provides an in-depth analysis of several xAI frameworks and methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the advances in computationally efficient artificial
intelligence (AI) techniques and their numerous applications in our everyday
lives, there is a pressing need to understand, through more detailed
explanations, the computational details hidden in black-box AI techniques
such as the most popular machine learning and deep learning methods.
Explainable AI (xAI) originated from these challenges and has recently
gained attention as researchers add comprehensive explainability to
traditional AI systems. This motivates the development of an appropriate
framework for successful applications of xAI in real-life scenarios with
respect to innovation, risk mitigation, ethical issues, and logical value to
users. In this book chapter, an in-depth analysis of several xAI frameworks
and methods, including LIME (Local Interpretable Model-agnostic
Explanations) and SHAP (SHapley Additive exPlanations), is provided. A
Random Forest classifier, serving as the black-box AI, is applied to a
publicly available diabetes-symptoms dataset, with LIME and SHAP used for
better interpretation. The results obtained are interesting in terms of
transparency, validity, and trustworthiness in diabetes disease prediction.
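To make the pipeline concrete, below is a minimal sketch of the setup the
abstract describes: a Random Forest black box trained on a diabetes-symptoms
table, explained locally with LIME and globally with SHAP. It assumes the
scikit-learn, lime, and shap packages; the file name diabetes_symptoms.csv,
the column names, and the hyperparameters are illustrative assumptions, not
the authors' code.

```python
# Minimal sketch (assumed data layout, not the authors' code): a Random
# Forest "black box" on a diabetes-symptoms table, explained with LIME
# (local) and SHAP (global).
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical symptoms CSV: categorical symptom columns plus a
# Positive/Negative "class" label.
df = pd.read_csv("diabetes_symptoms.csv")          # assumed file name
X = pd.get_dummies(df.drop(columns=["class"])).astype(float)
y = (df["class"] == "Positive").astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The black-box model under study.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# LIME: fit a local surrogate around one test instance and list the
# features that drive that single prediction.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["Negative", "Positive"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, rf.predict_proba, num_features=5
)
print(lime_exp.as_list())                # (feature rule, weight) pairs

# SHAP: exact Shapley values for tree ensembles; the summary plot ranks
# features by their overall contribution across the test set.
shap_values = shap.TreeExplainer(rf).shap_values(X_test)
# Depending on the SHAP version, classifiers yield a per-class list or a
# 3-D array; keep the positive-class attributions either way.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv_pos, X_test)
```

LIME here answers "why this patient?" for a single prediction, while the
SHAP summary plot shows which symptoms matter across the whole test set;
together they give the local and global views the chapter compares.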
Related papers
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection
  and Diagnosis: A Review (2024-04-17T17:49:38Z)
  We review the eXplainable AI (XAI) tools and techniques in this context. We
  focus on their role in making AI decision-making transparent, particularly
  in critical scenarios where humans are involved. We discuss current
  limitations and potential future research that aims to balance
  explainability with model performance.
- An Explainable AI Framework for Artificial Intelligence of Medical Things
  (2024-03-07T01:08:41Z)
  We leverage a custom XAI framework incorporating techniques such as Local
  Interpretable Model-Agnostic Explanations (LIME), SHapley Additive
  exPlanations (SHAP), and Gradient-weighted Class Activation Mapping
  (Grad-CAM). The proposed framework enhances the effectiveness of strategic
  healthcare methods and aims to instill trust and promote understanding in
  AI-driven medical applications. We apply the XAI framework to brain tumor
  detection as a use case, demonstrating accurate and transparent diagnosis.
- ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based
  Healthcare Decision Support using ChatGPT (2023-08-17T20:50:46Z)
  This study presents an innovative approach to the application of large
  language models (LLMs) in clinical decision-making, focusing on OpenAI's
  ChatGPT. Our approach introduces contextual prompts, strategically designed
  to include a task description, feature descriptions, and, crucially, the
  integration of domain knowledge, for high-quality binary classification
  tasks even in data-scarce scenarios.
- Brain-Inspired Computational Intelligence via Predictive Coding
  (2023-08-15T16:37:16Z)
  Predictive coding (PC) has shown promising performance in machine
  intelligence tasks. PC can model information processing in different brain
  areas and can be used in cognitive control and robotics.
- A Brief Review of Explainable Artificial Intelligence in Healthcare
  (2023-04-04T05:41:57Z)
  XAI refers to the techniques and methods for building AI applications whose
  decisions humans can understand. Model explainability and interpretability
  are vital for the successful deployment of AI models in healthcare
  practice.
- Informing clinical assessment by contextualizing post-hoc explanations of
  risk prediction models in type-2 diabetes (2023-02-11T18:07:11Z)
  We consider a comorbidity risk prediction scenario and focus on contexts
  regarding the patient's clinical state. We employ several state-of-the-art
  LLMs to present contexts around risk prediction model inferences and
  evaluate their acceptability. Our paper is one of the first end-to-end
  analyses identifying the feasibility and benefits of contextual
  explanations in a real-world clinical use case.
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
  (2022-12-08T23:23:39Z)
  The benefits, challenges, and drawbacks of AI in this field are reviewed.
  The use of data augmentation, explainable AI, and the integration of AI
  with traditional experimental methods are also discussed.
- Towards Human Cognition Level-based Experiment Design for Counterfactual
  Explanations (XAI) (2022-10-31T19:20:22Z)
  The emphasis of XAI research appears to have turned to a more pragmatic
  explanation approach for better understanding. An extensive area where
  cognitive science research may substantially influence XAI advancements is
  evaluating user knowledge and feedback. We propose a framework for
  generating and evaluating explanations on the grounds of different
  cognitive levels of understanding.
- What Do End-Users Really Want? Investigation of Human-Centered XAI for
  Mobile Health Apps (2022-10-07T12:51:27Z)
  We present a user-centered persona concept to evaluate explainable AI
  (XAI). Results show that users' demographics and personality, as well as
  the type of explanation, impact explanation preferences. Our insights bring
  interactive, human-centered XAI closer to practical application.
- Counterfactual Explanations as Interventions in Latent Space
  (2021-06-14T20:48:48Z)
  Counterfactual explanations aim to provide end users with a set of features
  that need to be changed in order to achieve a desired outcome. Current
  approaches rarely take into account the feasibility of the actions needed
  to realize the proposed explanations. We present Counterfactual
  Explanations as Interventions in Latent Space (CEILS), a methodology to
  generate counterfactual explanations; a minimal generic sketch of the
  counterfactual idea follows this list.
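As referenced in the counterfactual entry above, here is a minimal sketch of
the general counterfactual-explanation idea: a naive greedy feature-flip
search over the binary symptom features of the earlier Random Forest sketch.
This is not the CEILS method (which intervenes in a learned latent space to
keep the suggested actions feasible); the function name and the max_flips
parameter are illustrative assumptions.

```python
# Generic greedy counterfactual sketch (not CEILS): flip one binary
# symptom feature at a time on the Random Forest from the earlier sketch
# until the predicted class changes. Assumes 0/1-encoded features.
import numpy as np

def greedy_counterfactual(model, x, max_flips=3):
    """Return a modified copy of x whose predicted class differs,
    flipping as few binary features as possible (greedy, not optimal)."""
    x_cf = x.copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]  # the opposite class
    for _ in range(max_flips):
        best_j, best_p = None, -1.0
        for j in range(len(x_cf)):
            trial = x_cf.copy()
            trial[j] = 1 - trial[j]                  # flip one feature
            p = model.predict_proba(trial.reshape(1, -1))[0, target]
            if p > best_p:
                best_j, best_p = j, p
        x_cf[best_j] = 1 - x_cf[best_j]              # keep the best flip
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf                              # prediction changed
    return None                                      # none found in budget

# Usage with the earlier sketch's model and one test row:
# cf = greedy_counterfactual(rf, X_test.iloc[0].values.astype(float))
```

Comparing the returned row with the original shows which symptoms would have
to change to flip the diagnosis, which is exactly the feasibility question
(can the patient actually change those features?) that CEILS addresses.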