From Explainability to Action: A Generative Operational Framework for Integrating XAI in Clinical Mental Health Screening
- URL: http://arxiv.org/abs/2510.13828v1
- Date: Fri, 10 Oct 2025 05:46:39 GMT
- Title: From Explainability to Action: A Generative Operational Framework for Integrating XAI in Clinical Mental Health Screening
- Authors: Ratna Kandala, Akshata Kishore Moharir, Divya Arvinda Nayak
- Abstract summary: This paper argues that the lab-to-clinic gap in XAI is a translation problem and proposes the Generative Operational Framework. The framework is designed to ingest the raw, technical outputs from diverse XAI tools and synthesize them with clinical guidelines.
- Score: 0.3181700357675698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) has been presented as the critical component for unlocking the potential of machine learning in mental health screening (MHS). However, a persistent lab-to-clinic gap remains. Current XAI techniques, such as SHAP and LIME, excel at producing technically faithful outputs, such as feature importance scores, but fail to deliver clinically relevant, actionable insights that can be used by clinicians or understood by patients. This disconnect between technical transparency and human utility is the primary barrier to real-world adoption. This paper argues that this gap is a translation problem and proposes the Generative Operational Framework, a novel system architecture that leverages Large Language Models (LLMs) as a central translation engine. This framework is designed to ingest the raw, technical outputs from diverse XAI tools and synthesize them with clinical guidelines (via retrieval-augmented generation, RAG) to automatically generate human-readable, evidence-backed clinical narratives. To justify our solution, we provide a systematic analysis of the components it integrates, tracing the evolution from intrinsic models to generative XAI. We demonstrate how this framework directly addresses key operational barriers, including workflow integration, bias mitigation, and stakeholder-specific communication. This paper also provides a strategic roadmap for moving the field beyond the generation of isolated data points toward the delivery of integrated, actionable, and trustworthy AI in clinical practice.
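To make the proposed translation pipeline concrete, here is a minimal sketch of its three stages: ingest feature attributions from an XAI tool, retrieve guideline evidence (the RAG step), and prompt an LLM to produce a clinician-facing narrative. Every function name, guideline snippet, and field below is an illustrative assumption; the paper describes the architecture, not an implementation.

```python
# Minimal sketch of the Generative Operational Framework's translation step.
# `retrieve_guidelines` stands in for a RAG retriever and `llm_generate` for
# any LLM completion API; both are hypothetical stubs, not the authors' code.
from dataclasses import dataclass

@dataclass
class FeatureAttribution:
    feature: str
    value: float
    importance: float  # e.g., a SHAP value or LIME weight

def retrieve_guidelines(query: str, k: int = 3) -> list[str]:
    """Placeholder RAG retriever over a clinical-guideline corpus."""
    corpus = {
        "sleep": "Guidance: assess sleep disturbance when screening for depression.",
        "phq9": "PHQ-9 scores of 10 or more warrant a structured clinical interview.",
    }
    return [text for key, text in corpus.items() if key in query.lower()][:k]

def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completions endpoint)."""
    return "<clinical narrative generated from the prompt>"

def explain_for_clinician(attributions: list[FeatureAttribution]) -> str:
    top = sorted(attributions, key=lambda a: abs(a.importance), reverse=True)[:3]
    evidence = []
    for a in top:
        evidence += retrieve_guidelines(a.feature)
    prompt = (
        "You are assisting a clinician reviewing a mental health screening model.\n"
        f"Top feature attributions: {[(a.feature, a.importance) for a in top]}\n"
        f"Relevant guideline excerpts: {evidence}\n"
        "Write a short, evidence-backed narrative explaining this screening result."
    )
    return llm_generate(prompt)

print(explain_for_clinician([
    FeatureAttribution("sleep disturbance", 1.0, 0.42),
    FeatureAttribution("PHQ9 item 2", 3.0, 0.31),
]))
```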
Related papers
- A Model-Driven Engineering Approach to AI-Powered Healthcare Platforms [0.03262230127283451]
We introduce a model-driven engineering (MDE) framework designed specifically for healthcare AI. The framework relies on formal metamodels, domain-specific languages, and automated transformations to move from high-level specifications to running software. We evaluate this approach in a multi-center cancer immunotherapy study.
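A toy sketch of the specification-to-code idea this summary describes; the metamodel and transformation below are invented for illustration, since the paper's actual DSLs are not given here.

```python
# Hypothetical MDE-style transformation: a tiny metamodel element plus an
# automated transformation that emits a runnable scaffold from a spec.
from dataclasses import dataclass

@dataclass
class ModelSpec:          # toy metamodel element
    name: str
    inputs: list[str]
    task: str             # e.g., "classification"

def to_pipeline_code(spec: ModelSpec) -> str:
    """Automated transformation: high-level spec -> code scaffold."""
    features = ", ".join(repr(f) for f in spec.inputs)
    return (
        f"def {spec.name}_pipeline(record):\n"
        f"    features = [record[k] for k in ({features},)]\n"
        f"    return model.predict([features])  # {spec.task} head\n"
    )

print(to_pipeline_code(ModelSpec("tumor_response", ["age", "pdl1_score"], "classification")))
```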
arXiv Detail & Related papers (2025-10-10T12:00:12Z)
- Retrieval-Augmented Framework for LLM-Based Clinical Decision Support [0.19999259391104388]
This paper proposes a clinical decision support system powered by Large Language Models (LLMs) to assist prescribing clinicians. The framework integrates natural language processing with structured clinical inputs to produce contextually relevant recommendations. We outline the system's technical components, including representation alignment and generation strategies.
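The retrieval step such a framework needs can be sketched as below; the bag-of-words "embedding" and guideline snippets are stand-ins, as the paper's actual retriever and encoder are not specified in this summary.

```python
# Toy similarity retrieval over guideline snippets; a real system would use a
# dense encoder rather than word counts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

guidelines = [
    "avoid prescribing maois together with ssris",
    "titrate lithium slowly and monitor renal function",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(guidelines, key=lambda g: cosine(q, embed(g)), reverse=True)[:k]

print(retrieve("check interactions between maois and ssris"))
```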
arXiv Detail & Related papers (2025-10-01T18:45:25Z)
- Interpretable Clinical Classification with Kolmogorov-Arnold Networks [70.72819760172744]
Kolmogorov-Arnold Networks (KANs) offer intrinsic interpretability through transparent, symbolic representations. KANs support built-in patient-level insights, intuitive visualizations, and nearest-patient retrieval. These results position KANs as a promising step toward trustworthy AI that clinicians can understand, audit, and act upon.
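Nearest-patient retrieval, one of the mechanisms named above, reduces to a similarity search in feature space; the records and feature names below are invented for illustration.

```python
# Minimal nearest-patient retrieval: explain a new case partly by showing the
# most similar prior patient (Euclidean distance over shared features).
import math

patients = {
    "P001": {"phq9": 14.0, "sleep_hours": 4.5, "age": 34.0},
    "P002": {"phq9": 6.0,  "sleep_hours": 7.5, "age": 51.0},
    "P003": {"phq9": 15.0, "sleep_hours": 5.0, "age": 29.0},
}

def nearest_patient(query: dict[str, float], k: int = 1) -> list[str]:
    def dist(record: dict[str, float]) -> float:
        return math.sqrt(sum((query[f] - record[f]) ** 2 for f in query))
    return sorted(patients, key=lambda pid: dist(patients[pid]))[:k]

print(nearest_patient({"phq9": 13.0, "sleep_hours": 5.0, "age": 31.0}))
```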
arXiv Detail & Related papers (2025-09-20T17:21:58Z)
- Generative Artificial Intelligence in Medical Imaging: Foundations, Progress, and Clinical Translation [14.306027161664565]
Generative artificial intelligence (AI) is rapidly transforming medical imaging. Generative AI contributes to key stages of the imaging continuum, from acquisition and reconstruction to cross-modality synthesis. This review aims to guide future research and foster interdisciplinary collaboration at the intersection of AI, medicine, and biomedical engineering.
arXiv Detail & Related papers (2025-08-07T07:58:40Z)
- NEARL-CLIP: Interacted Query Adaptation with Orthogonal Regularization for Medical Vision-Language Understanding [51.63264715941068]
NEARL-CLIP (iNteracted quEry Adaptation with oRthogonaL Regularization) is a novel cross-modality interaction VLM-based framework.
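For orientation, a generic orthogonality regularizer of the kind the title refers to is the Frobenius penalty ||W^T W - I||_F^2; the sketch below shows that standard form, not necessarily the paper's exact formulation.

```python
# Generic orthogonal regularization on a set of learnable query vectors.
import torch

def orthogonal_penalty(w: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of W's columns from an orthonormal set."""
    gram = w.T @ w                                   # (d, d) Gram matrix
    eye = torch.eye(w.shape[1], device=w.device)
    return torch.linalg.matrix_norm(gram - eye, ord="fro") ** 2

queries = torch.randn(512, 8, requires_grad=True)   # 8 learnable query vectors
loss = orthogonal_penalty(queries)                   # added to the task loss
loss.backward()
print(float(loss))
```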
arXiv Detail & Related papers (2025-08-06T05:44:01Z)
- Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications [59.721265428780946]
Large Language Models (LLMs) in medicine have enabled impressive capabilities, yet a critical gap remains in their ability to perform systematic, transparent, and verifiable reasoning. This paper provides the first systematic review of this emerging field. We propose a taxonomy of reasoning enhancement techniques, categorized into training-time strategies and test-time mechanisms.
arXiv Detail & Related papers (2025-08-01T14:41:31Z)
- RadFabric: Agentic AI System with Reasoning Capability for Radiology [61.25593938175618]
RadFabric is a multi-agent, multimodal reasoning framework that unifies visual and textual analysis for comprehensive CXR interpretation. The system employs specialized CXR agents for pathology detection, an Anatomical Interpretation Agent to map visual findings to precise anatomical structures, and a Reasoning Agent powered by large multimodal reasoning models to synthesize visual, anatomical, and clinical data into transparent and evidence-based diagnoses.
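The orchestration of those three agent roles can be sketched with stubs; every function and finding below is a placeholder, not RadFabric's implementation.

```python
# Illustrative composition of the three agent roles named above.
def pathology_agent(image_id: str) -> list[str]:
    return ["opacity in left lower zone"]             # stub detector output

def anatomy_agent(findings: list[str]) -> dict[str, str]:
    return {f: "left lower lobe" for f in findings}   # stub anatomical mapping

def reasoning_agent(mapped: dict[str, str], clinical_note: str) -> str:
    # A real system would call a large multimodal reasoning model here.
    facts = "; ".join(f"{k} localized to {v}" for k, v in mapped.items())
    return f"Findings: {facts}. Context: {clinical_note}. Impression: possible pneumonia."

findings = pathology_agent("cxr_0001")
print(reasoning_agent(anatomy_agent(findings), "fever and productive cough"))
```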
arXiv Detail & Related papers (2025-06-17T03:10:33Z)
- GAMedX: Generative AI-based Medical Entity Data Extractor Using Large Language Models [1.123722364748134]
This paper introduces GAMedX, a Named Entity Recognition (NER) approach utilizing Large Language Models (LLMs).
The methodology integrates open-source LLMs for NER, utilizing chained prompts and Pydantic schemas for structured output to navigate the complexities of specialized medical jargon.
The findings reveal a significant ROUGE F1 score on one of the evaluation datasets, with an accuracy of 98%.
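The structured-output pattern the summary describes pairs an LLM prompt chain with a Pydantic schema that validates the returned JSON; the schema fields below are invented for illustration.

```python
# Validating LLM output against a Pydantic schema (Pydantic v2 API).
from pydantic import BaseModel

class MedicalEntities(BaseModel):
    medications: list[str]
    diagnoses: list[str]
    dosages: list[str]

# Stand-in for an LLM response to a chained extraction prompt.
llm_json = '{"medications": ["sertraline"], "diagnoses": ["MDD"], "dosages": ["50 mg daily"]}'

entities = MedicalEntities.model_validate_json(llm_json)
print(entities.medications)
```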
arXiv Detail & Related papers (2024-05-31T02:53:22Z)
- Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications [1.6609516435725236]
We propose an Explainable AI (XAI) framework designed to answer five critical questions.
We incorporated various techniques such as Local Interpretable Model-agnostic Explanations (LIME).
We showcased an XAI interface prototype that adheres to this framework for predicting major postoperative complications.
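A minimal LIME call of the kind such an interface builds on, assuming the open-source `lime` package is installed; the model, feature names, and data are synthetic stand-ins.

```python
# Local explanation of one tabular prediction with LIME.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["age", "bmi", "creatinine", "asa_class"],
    class_names=["no complication", "complication"], mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())   # [(feature condition, local weight), ...]
```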
arXiv Detail & Related papers (2024-04-18T21:01:27Z)
- Designing Interpretable ML System to Enhance Trust in Healthcare: A Systematic Review to Proposed Responsible Clinician-AI-Collaboration Framework [13.215318138576713]
The paper reviews interpretable AI processes, methods, applications, and the challenges of implementation in healthcare.
It aims to foster a comprehensive understanding of the crucial role of a robust interpretability approach in healthcare.
arXiv Detail & Related papers (2023-11-18T12:29:18Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robot and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
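The trajectory-planning step named above can be reduced, at its geometric core, to generating waypoints in the registered frame; the sketch below shows only that core, with made-up coordinates, and ignores the anatomical and safety constraints a real planner must respect.

```python
# Straight-line waypoints from instrument tip to injection target.
import numpy as np

def plan_trajectory(tip: np.ndarray, target: np.ndarray, steps: int = 5) -> np.ndarray:
    """Linearly interpolated waypoints in the registered (common) frame."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * tip + t * target

waypoints = plan_trajectory(np.array([0.0, 0.0, 5.0]), np.array([1.2, -0.4, 0.0]))
print(waypoints.round(2))
```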
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
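A schematic of counterfactual search in a latent space, in the spirit of the summary above but not the authors' CEILS algorithm: move the latent code just far enough that the decoded point crosses the classifier's decision boundary, so the suggested change stays on the data manifold. The linear decoder and classifier are toy stand-ins.

```python
# Smallest latent shift that flips a toy linear classifier on decoded data.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 2))              # toy linear "decoder": x = W @ z

def decode(z: np.ndarray) -> np.ndarray:
    return W @ z

def classify(x: np.ndarray) -> int:
    return int(x.sum() > 1.0)            # toy linear classifier

def counterfactual(z0: np.ndarray, margin: float = 0.1) -> np.ndarray:
    """Closed-form minimal latent shift for this linear toy setup."""
    grad = W.T @ np.ones(2)              # d(x.sum())/dz for the linear decoder
    gap = (1.0 + margin) - decode(z0).sum()
    return z0 + (gap / (grad @ grad)) * grad

z0 = np.array([0.0, 0.0])
z_cf = counterfactual(z0)
print(classify(decode(z0)), "->", classify(decode(z_cf)))   # 0 -> 1
```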
arXiv Detail & Related papers (2021-06-14T20:48:48Z)