Safeguarding Patient Trust in the Age of AI: Tackling Health Misinformation with Explainable AI
- URL: http://arxiv.org/abs/2509.04052v1
- Date: Thu, 04 Sep 2025 09:29:34 GMT
- Title: Safeguarding Patient Trust in the Age of AI: Tackling Health Misinformation with Explainable AI
- Authors: Sueun Hong, Shuojie Fu, Ovidiu Serban, Brianna Bao, James Kinross, Francesca Toni, Guy Martin, Uddhav Vaghela
- Abstract summary: This white paper presents an explainable AI framework developed through the EPSRC INDICATE project to combat medical misinformation.
Our systematic review of 17 studies reveals the urgent need for transparent AI systems in healthcare.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AI-generated health misinformation poses unprecedented threats to patient safety and healthcare system trust globally. This white paper presents an explainable AI framework developed through the EPSRC INDICATE project to combat medical misinformation while enhancing evidence-based healthcare delivery. Our systematic review of 17 studies reveals the urgent need for transparent AI systems in healthcare. The proposed solution demonstrates 95% recall in clinical evidence retrieval and integrates novel trustworthiness classifiers achieving 76% F1 score in detecting biomedical misinformation. Results show that explainable AI can transform traditional 6-month expert review processes into real-time, automated evidence synthesis while maintaining clinical rigor. This approach offers a critical intervention to preserve healthcare integrity in the AI era.
Related papers
- Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems [47.03825808787752]
This paper transitions from literature review to practical countermeasures.
We report on improvements in AI-generated content produced through Large Language Models (LLMs) and multimodal systems.
We discuss mitigation strategies including LLM-based detection, inoculation approaches, and the dual-use nature of generative AI.
arXiv Detail & Related papers (2026-01-29T16:42:22Z) - DispatchMAS: Fusing taxonomy and artificial intelligence agents for emergency medical services [49.70819009392778]
Large Language Models (LLMs) and Multi-Agent Systems (MAS) offer opportunities to augment dispatchers.
This study aimed to develop and evaluate a taxonomy-grounded, multi-agent system for simulating realistic scenarios.
arXiv Detail & Related papers (2025-10-24T08:01:21Z) - The doctor will polygraph you now: ethical concerns with AI for fact-checking patients [0.23248585800296404]
Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors.
This raises novel ethical concerns about respect, privacy, and control over patient data.
arXiv Detail & Related papers (2024-08-15T02:55:30Z) - Leveraging Generative AI for Clinical Evidence Summarization Needs to Ensure Trustworthiness [47.51360338851017]
Evidence-based medicine promises to improve the quality of healthcare by empowering medical decisions and practices with the best available evidence.
The rapid growth of medical evidence, which can be obtained from various sources, poses a challenge in collecting, appraising, and synthesizing the evidential information.
Recent advancements in generative AI, exemplified by large language models, hold promise in facilitating the arduous task.
arXiv Detail & Related papers (2023-11-19T03:29:45Z) - Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework [13.215318138576713]
The paper reviews interpretable AI processes, methods, applications, and the challenges of implementation in healthcare.
It aims to foster a comprehensive understanding of the crucial role of a robust interpretability approach in healthcare.
arXiv Detail & Related papers (2023-11-18T12:29:18Z) - The impact of responding to patient messages with large language model assistance [4.243020918808522]
Documentation burden is a major contributor to clinician burnout.
Many hospitals are actively integrating such systems into electronic medical record systems.
We are the first to examine the utility of large language models in assisting clinicians to draft responses to patient questions.
arXiv Detail & Related papers (2023-10-26T18:03:46Z) - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z) - COVID-Net USPro: An Open-Source Explainable Few-Shot Deep Prototypical Network to Monitor and Detect COVID-19 Infection from Point-of-Care Ultrasound Images [66.63200823918429]
COVID-Net USPro monitors and detects COVID-19 positive cases with high precision and recall from minimal ultrasound images.
The network achieves 99.65% overall accuracy, 99.7% recall and 99.67% precision for COVID-19 positive cases when trained with only 5 shots.
arXiv Detail & Related papers (2023-01-04T16:05:51Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to help practitioners understand and interpret explainable AI systems using a variety of techniques.
Our paper presents examples based on the heart disease dataset and elucidates how explainability techniques should be selected to build trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.