Navigating Healthcare Insights: A Bird's Eye View of Explainability with
Knowledge Graphs
- URL: http://arxiv.org/abs/2309.16593v1
- Date: Thu, 28 Sep 2023 16:57:03 GMT
- Title: Navigating Healthcare Insights: A Bird's Eye View of Explainability with
Knowledge Graphs
- Authors: Satvik Garg, Shivam Parikh, Somya Garg
- Abstract summary: Knowledge graphs (KGs) are gaining prominence in Healthcare AI, especially in drug discovery and pharmaceutical research.
This overview summarizes recent literature on the impact of KGs in healthcare and their role in developing explainable AI models.
We emphasize the importance of making KGs more interpretable through knowledge-infused learning in healthcare.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) are gaining prominence in Healthcare AI, especially in
drug discovery and pharmaceutical research as they provide a structured way to
integrate diverse information sources, enhancing AI system interpretability.
This interpretability is crucial in healthcare, where trust and transparency
matter, and eXplainable AI (XAI) supports decision making for healthcare
professionals. This overview summarizes recent literature on the impact of KGs
in healthcare and their role in developing explainable AI models. We cover KG
workflow, including construction, relationship extraction, reasoning, and their
applications in areas like Drug-Drug Interactions (DDI), Drug Target
Interactions (DTI), Drug Development (DD), Adverse Drug Reactions (ADR), and
bioinformatics. We emphasize the importance of making KGs more interpretable
through knowledge-infused learning in healthcare. Finally, we highlight
research challenges and provide insights for future directions.
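To ground the workflow the abstract outlines, here is a minimal, hypothetical sketch of a healthcare KG: triples stand in for the construction step, typed edges play the role of extracted relationships (DDI, DTI, ADR), and a simple traversal stands in for reasoning. The drug and protein names, the triples, and the use of networkx are illustrative assumptions, not content from the paper.

```python
# Minimal sketch of a healthcare KG workflow (illustrative assumptions only):
# build a small graph from (head, relation, tail) triples, then answer a
# simple DDI/ADR-style query by traversing typed edges.
import networkx as nx

triples = [
    ("warfarin", "interacts_with", "aspirin"),   # DDI relationship
    ("warfarin", "targets", "VKORC1"),           # DTI relationship
    ("aspirin", "targets", "PTGS1"),
    ("aspirin", "causes", "gastric_bleeding"),   # ADR relationship
]

kg = nx.MultiDiGraph()
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)

def neighbors_by_relation(graph, node, relation):
    """Return tail entities connected to `node` by the given relation."""
    return [tail for _, tail, data in graph.out_edges(node, data=True)
            if data["relation"] == relation]

# A toy "reasoning" step: list interaction partners of a drug and the
# adverse reactions recorded for those partners.
for partner in neighbors_by_relation(kg, "warfarin", "interacts_with"):
    adrs = neighbors_by_relation(kg, partner, "causes")
    print(f"warfarin interacts with {partner}; ADRs linked to {partner}: {adrs}")
```

In a real pipeline, construction and relationship extraction would draw on biomedical literature and curated databases, and reasoning would typically rely on KG embeddings or rule-based inference rather than direct traversal; the sketch only shows the shape of the data and queries involved.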
Related papers
- Enhancing Adverse Drug Event Detection with Multimodal Dataset: Corpus Creation and Model Development [12.258245804049114]
The mining of adverse drug events (ADEs) is pivotal in pharmacovigilance, enhancing patient safety.
Traditional ADE detection methods are reliable but slow, not easily adaptable to large-scale operations.
Previous ADE mining studies have focused on text-based methodologies, overlooking visual cues.
We present a MultiModal Adverse Drug Event (MMADE) detection dataset, merging ADE-related textual information with visual aids.
arXiv Detail & Related papers (2024-05-24T17:58:42Z)
- Leveraging Generative AI for Clinical Evidence Summarization Needs to Ensure Trustworthiness [47.51360338851017]
Evidence-based medicine promises to improve the quality of healthcare by empowering medical decisions and practices with the best available evidence.
The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing the evidential information.
Recent advancements in generative AI, exemplified by large language models, hold promise in facilitating the arduous task.
arXiv Detail & Related papers (2023-11-19T03:29:45Z)
- Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework [13.215318138576713]
The paper reviews interpretable AI processes, methods, applications, and the challenges of implementation in healthcare.
It aims to foster a comprehensive understanding of the crucial role of a robust interpretability approach in healthcare.
arXiv Detail & Related papers (2023-11-18T12:29:18Z)
- Emerging Drug Interaction Prediction Enabled by Flow-based Graph Neural Network with Biomedical Network [69.16939798838159]
We propose EmerGNN, a graph neural network (GNN) that can effectively predict interactions for emerging drugs.
EmerGNN learns pairwise representations of drugs by extracting the paths between drug pairs, propagating information from one drug to the other, and incorporating the relevant biomedical concepts on the paths.
Overall, EmerGNN has higher accuracy than existing approaches in predicting interactions for emerging drugs and can identify the most relevant information on the biomedical network (see the illustrative sketch after this list).
arXiv Detail & Related papers (2023-11-15T06:34:00Z)
- The Impact of ChatGPT and LLMs on Medical Imaging Stakeholders: Perspectives and Use Cases [9.488544611843073]
This study investigates the transformative potential of Large Language Models (LLMs), such as OpenAI ChatGPT, in medical imaging.
The paper introduces an analytic framework for presenting the complex interactions between LLMs and the broader ecosystem of medical imaging stakeholders.
arXiv Detail & Related papers (2023-06-11T20:39:13Z)
- A Review on Knowledge Graphs for Healthcare: Resources, Applications, and Promises [52.31710895034573]
This work provides the first comprehensive review of healthcare knowledge graphs (HKGs).
It summarizes the pipeline and key techniques for HKG construction, as well as the common utilization approaches.
At the application level, we delve into the successful integration of HKGs across various health domains.
arXiv Detail & Related papers (2023-06-07T21:51:56Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs end users can understand and interpret.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practices.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study [62.376800537374024]
We study scientific LMs for KG completion, exploring whether we can tap into their latent knowledge to enhance biomedical link prediction.
We integrate the LM-based models with KG embedding models, using a router method that learns to assign each input example to either type of model and provides a substantial boost in performance.
arXiv Detail & Related papers (2021-06-17T17:55:33Z)
- Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to enlighten practitioners on the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper contains examples based on the heart disease dataset and elucidates how explainability techniques should be chosen to create trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z)
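As a follow-up to the EmerGNN entry above (flow-based GNN for emerging drug interactions): the path-based intuition, building a pairwise drug representation from the biomedical concepts lying on paths between the two drugs, can be sketched roughly as below. This is not the authors' implementation; the graph contents, the path-length cutoff, and the bag-of-concepts aggregation are assumptions, and EmerGNN itself additionally propagates and weights information along these paths with a learned GNN.

```python
# Rough sketch of the path-based idea behind pairwise drug representations
# (illustrative only, not EmerGNN's code): enumerate short paths between a
# drug pair in a biomedical network and collect the concepts found on them.
import networkx as nx

bio_net = nx.Graph()
bio_net.add_edges_from([
    ("drug_A", "protein_P1"), ("protein_P1", "pathway_X"),
    ("pathway_X", "protein_P2"), ("protein_P2", "drug_B"),
    ("drug_A", "gene_G1"), ("gene_G1", "drug_B"),
])

def pair_concepts(graph, drug_u, drug_v, cutoff=4):
    """Collect intermediate biomedical concepts on short paths between two drugs."""
    concepts = set()
    for path in nx.all_simple_paths(graph, drug_u, drug_v, cutoff=cutoff):
        concepts.update(path[1:-1])  # keep only the intermediate nodes
    return concepts

# These concepts would feed the pairwise representation of the drug pair;
# here we simply print them.
print(pair_concepts(bio_net, "drug_A", "drug_B"))
```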
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.