Secure Diagnostics: Adversarial Robustness Meets Clinical Interpretability
- URL: http://arxiv.org/abs/2504.05483v1
- Date: Mon, 07 Apr 2025 20:26:02 GMT
- Title: Secure Diagnostics: Adversarial Robustness Meets Clinical Interpretability
- Authors: Mohammad Hossein Najafi, Mohammad Morsali, Mohammadreza Pashanejad, Saman Soleimani Roudi, Mohammad Norouzi, Saeed Bagheri Shouraki
- Abstract summary: Deep neural networks for medical image classification often fail to generalize consistently in clinical practice. This paper examines interpretability in deep neural networks fine-tuned for fracture detection by evaluating model performance against adversarial attacks.
- Score: 9.522045116604358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks for medical image classification often fail to generalize consistently in clinical practice due to violations of the i.i.d. assumption and opaque decision-making. This paper examines interpretability in deep neural networks fine-tuned for fracture detection by evaluating model performance against adversarial attacks and comparing interpretability methods to fracture regions annotated by an orthopedic surgeon. Our findings show that robust models yield explanations more aligned with clinically meaningful areas, indicating that robustness encourages anatomically relevant feature prioritization. We emphasize the value of interpretability for facilitating human-AI collaboration, in which models serve as assistants under a human-in-the-loop paradigm: clinically plausible explanations foster trust, enable error correction, and discourage reliance on AI for high-stakes decisions. This paper investigates robustness and interpretability as complementary benchmarks for bridging the gap between benchmark performance and safe, actionable clinical deployment.
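The evaluation the abstract describes has two ingredients: perturbing inputs with an adversarial attack and scoring how well a model's saliency map overlaps the surgeon-annotated fracture region. A minimal sketch of both, assuming an FGSM-style attack and an IoU overlap score on toy arrays (the specific attack, saliency method, and threshold are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM-style perturbation: step of size eps along the sign of the loss gradient."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep the image in valid pixel range

def saliency_overlap(saliency, annotation, q=0.9):
    """IoU between the top-quantile saliency pixels and an expert-annotated mask."""
    hot = saliency >= np.quantile(saliency, q)
    inter = np.logical_and(hot, annotation).sum()
    union = np.logical_or(hot, annotation).sum()
    return inter / union if union else 0.0

# Toy 8x8 "radiograph" with a surgeon-annotated fracture region.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
annotation = np.zeros((8, 8), dtype=bool)
annotation[2:4, 2:4] = True                 # hypothetical annotated fracture patch
saliency = rng.random((8, 8)) * 0.1
saliency[2:4, 2:4] += 1.0                   # a "robust" model attributes inside the patch

adv = fgsm_perturb(img, grad=rng.standard_normal((8, 8)))
iou = saliency_overlap(saliency, annotation)
```

A robust model, in the paper's sense, is one whose `iou`-style alignment stays high; an attribution map scattered outside the annotated region would drive the score toward zero.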
Related papers
- Efficient Epistemic Uncertainty Estimation in Cerebrovascular Segmentation [1.3980986259786223]
We introduce an efficient ensemble model combining the advantages of Bayesian Approximation and Deep Ensembles. Areas of high model uncertainty and erroneous predictions are aligned, which demonstrates the effectiveness and reliability of the approach.
arXiv Detail & Related papers (2025-03-28T09:39:37Z)
- Attribute Regularized Soft Introspective Variational Autoencoder for Interpretable Cardiac Disease Classification [2.4828003234992666]
Interpretability is essential to ensure that clinicians can comprehend and trust artificial intelligence models.
We propose a novel interpretable approach that combines attribute regularization of the latent space within the framework of an adversarially trained variational autoencoder.
arXiv Detail & Related papers (2023-12-14T13:20:57Z)
- Hypergraph Convolutional Networks for Fine-grained ICU Patient Similarity Analysis and Risk Prediction [15.06049250330114]
The Intensive Care Unit (ICU) is one of the most important parts of a hospital, which admits critically ill patients and provides continuous monitoring and treatment.
Various patient outcome prediction methods have been attempted to assist healthcare professionals in clinical decision-making.
arXiv Detail & Related papers (2023-08-24T05:26:56Z)
- TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z)
- Automatic diagnosis of knee osteoarthritis severity using Swin transformer [55.01037422579516]
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint.
We propose an automated approach that employs the Swin Transformer to predict the severity of KOA.
arXiv Detail & Related papers (2023-07-10T09:49:30Z)
- Assisting clinical practice with fuzzy probabilistic decision trees [2.0999441362198907]
We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
arXiv Detail & Related papers (2023-04-16T14:05:16Z)
- Confidence-Driven Deep Learning Framework for Early Detection of Knee Osteoarthritis [8.193689534916988]
Knee Osteoarthritis (KOA) is a prevalent musculoskeletal disorder that severely impacts mobility and quality of life.
We propose a confidence-driven deep learning framework for early KOA detection, focusing on distinguishing KL-0 and KL-2 stages.
Experimental results demonstrate that the proposed framework achieves competitive accuracy, sensitivity, and specificity, comparable to those of expert radiologists.
arXiv Detail & Related papers (2023-03-23T11:57:50Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
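The subjective-logic formulation this summary refers to is usually realized by mapping network outputs to non-negative Dirichlet evidence, from which both class probabilities and an explicit uncertainty (vacuity) fall out. A minimal numpy sketch under that standard evidential-deep-learning formulation (not DEviS's actual code; the ReLU evidence head and the filtering threshold are illustrative assumptions):

```python
import numpy as np

def evidential_output(logits):
    """Map raw outputs to Dirichlet evidence: alpha_k = e_k + 1, S = sum(alpha).
    Expected probability p_k = alpha_k / S; vacuity u = K / S (high when evidence is scarce)."""
    evidence = np.maximum(logits, 0.0)           # assumed ReLU evidence head
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1, keepdims=True)
    prob = alpha / S                             # expected class probabilities
    K = logits.shape[-1]
    uncertainty = K / S.squeeze(-1)              # subjective-logic vacuity per sample
    return prob, uncertainty

# Two pixels: strong evidence for class 0 vs. almost no evidence at all.
logits = np.array([[9.0, 1.0],
                   [0.1, 0.1]])
prob, u = evidential_output(logits)

# Uncertainty-aware filtering in the same spirit: keep only low-vacuity predictions.
reliable = u < 0.5
```

The low-evidence pixel gets near-uniform probabilities and vacuity close to 1, so a threshold like the hypothetical `u < 0.5` would filter it out as unreliable.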
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice such models must not only be accurate, but provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals that could learn discriminative representations for segmentation tasks.
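"Maximizing agreement between clean images and their adversarial examples by a contrastive loss in the output space" is typically instantiated as an InfoNCE-style objective where each clean/adversarial output pair is the positive and other batch items are negatives. A small numpy sketch of that generic form (an illustrative assumption about the loss shape, not ASSUDA's exact implementation):

```python
import numpy as np

def agreement_loss(clean_out, adv_out, temperature=0.5):
    """InfoNCE-style agreement: row i of clean_out should match row i of adv_out
    (positive pair) more than any other row in the batch (negatives)."""
    a = clean_out / np.linalg.norm(clean_out, axis=1, keepdims=True)
    b = adv_out / np.linalg.norm(adv_out, axis=1, keepdims=True)
    sim = (a @ b.T) / temperature                # pairwise cosine similarities
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprob))            # diagonal entries are the positives

rng = np.random.default_rng(0)
clean = rng.standard_normal((4, 16))             # toy output-space features
aligned = clean + 0.01 * rng.standard_normal((4, 16))   # adversarial outputs agree
shuffled = clean[::-1].copy()                            # adversarial outputs disagree

low = agreement_loss(clean, aligned)
high = agreement_loss(clean, shuffled)
```

Minimizing this loss pushes a segmentation network to produce the same output for an image and its adversarial counterpart, which is the agreement property the summary describes.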
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.