Now You See It, Now You Don't: Adversarial Vulnerabilities in
Computational Pathology
- URL: http://arxiv.org/abs/2106.08153v2
- Date: Wed, 16 Jun 2021 20:34:31 GMT
- Title: Now You See It, Now You Don't: Adversarial Vulnerabilities in
Computational Pathology
- Authors: Alex Foote, Amina Asif, Ayesha Azam, Tim Marshall-Cox, Nasir Rajpoot
and Fayyaz Minhas
- Abstract summary: We show that a highly accurate model for classification of tumour patches in pathology images can easily be attacked with minimal perturbations.
Our analytical results show that it is possible to generate single-instance white-box attacks on specific input images with high success rate and low perturbation energy.
We systematically analyze the relationship between perturbation energy of an adversarial attack, its impact on morphological constructs of clinical significance, their perceptibility by a trained pathologist and saliency maps obtained using deep learning models.
- Score: 2.1577322127603407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models are routinely employed in computational pathology
(CPath) for solving problems of diagnostic and prognostic significance.
Typically, the generalization performance of CPath models is analyzed using
evaluation protocols such as cross-validation and testing on multi-centric
cohorts. However, to ensure that such CPath solutions are robust and safe for
use in a clinical setting, a critical analysis of their predictive performance
and vulnerability to adversarial attacks is required, which is the focus of
this paper. Specifically, we show that a highly accurate model for
classification of tumour patches in pathology images (AUC > 0.95) can easily be
attacked with minimal perturbations which are imperceptible to lay humans and
trained pathologists alike. Our analytical results show that it is possible to
generate single-instance white-box attacks on specific input images with high
success rate and low perturbation energy. Furthermore, we have also generated a
single universal perturbation matrix, using the training dataset only, which,
when added to unseen test images, forces the trained neural network to flip its
prediction labels with high confidence at a success rate of > 84%.
We systematically analyze the relationship between perturbation energy of an
adversarial attack, its impact on morphological constructs of clinical
significance, their perceptibility by a trained pathologist and saliency maps
obtained using deep learning models. Based on our analysis, we strongly
recommend that computational pathology models be critically analyzed using the
proposed adversarial validation strategy prior to clinical adoption.
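The abstract describes two attack modes: a single-instance white-box attack on a specific image, and a single universal perturbation computed from the training data alone and reused on unseen test images. The paper's actual optimiser and model are not given here, so the sketch below is only a minimal illustration of the mechanics, using an FGSM-style sign step on a toy logistic "patch classifier"; the model, patch size, `eps` budget, and margin are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Toy stand-in for a tumour-patch classifier: a logistic model on a
# flattened patch, p(tumour) = sigmoid(w.x + b). The paper attacks a deep
# CNN (AUC > 0.95); a linear model is used here only so the input
# gradient has a closed form. Real attacks would also clip the perturbed
# image back to the valid pixel range, omitted here for clarity.
rng = np.random.default_rng(0)
d = 128                               # flattened patch size (illustrative)
w = rng.normal(size=d)
b = 0.0

def predict(x):
    """Probability that patch x is tumour."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def make_tumour_patch():
    """Synthetic patch the model confidently classifies as tumour (logit +2)."""
    base = rng.uniform(0.3, 0.7, size=d)
    return base + (2.0 - base @ w) / (w @ w) * w

def input_gradient(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input.

    For a logistic model this is (p - y) * w; for a CNN it would come
    from backpropagation through the network.
    """
    return (predict(x) - y) * w

eps = 0.05                            # per-pixel perturbation budget

# --- single-instance white-box attack (FGSM-style sign step) ---
x = make_tumour_patch()
x_adv = x + eps * np.sign(input_gradient(x, 1.0))

# --- universal perturbation: one matrix built from training data only ---
train = [make_tumour_patch() for _ in range(50)]
mean_grad = np.mean([input_gradient(t, 1.0) for t in train], axis=0)
v = eps * np.sign(mean_grad)          # single matrix, reused unchanged

test = [make_tumour_patch() for _ in range(50)]  # unseen patches
success = np.mean([predict(t + v) < 0.5 for t in test])
energy = np.linalg.norm(v)            # "perturbation energy" of the attack
print(f"clean p={predict(x):.2f}  adv p={predict(x_adv):.2f}  "
      f"universal success rate={success:.0%}  energy={energy:.2f}")
```

The same mechanics carry over to a CNN by replacing the closed-form gradient with backpropagation; the perturbation energy the paper analyzes corresponds to the norm of the added perturbation, computed here as `np.linalg.norm(v)`.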
Related papers
- Investigating the Impact of Histopathological Foundation Models on Regressive Prediction of Homologous Recombination Deficiency [52.50039435394964]
We systematically evaluate foundation models for regression-based tasks.
We extract patch-level features from whole slide images (WSI) using five state-of-the-art foundation models.
Models are trained to predict continuous HRD scores based on these extracted features across breast, endometrial, and lung cancer cohorts.
arXiv Detail & Related papers (2026-01-29T14:06:50Z) - Preventing Shortcut Learning in Medical Image Analysis through Intermediate Layer Knowledge Distillation from Specialist Teachers [0.0]
Deep learning models are prone to learning shortcuts to problems using spuriously correlated yet irrelevant features of their training data.
In high-risk applications such as medical image analysis, this phenomenon may prevent models from using clinically meaningful features when making predictions.
We propose a novel knowledge distillation framework that leverages a teacher network fine-tuned on a small subset of task-relevant data to mitigate shortcut learning.
arXiv Detail & Related papers (2025-11-21T17:18:35Z) - Efficient Automated Diagnosis of Retinopathy of Prematurity by Customize CNN Models [0.0]
We focus on refining and evaluating CNN-based approaches for precise and efficient ROP detection.
Results underscore the supremacy of tailored CNN models over pre-trained counterparts, evident in heightened accuracy and F1-scores.
We showcase the feasibility of deploying these models within dedicated software and hardware configurations, highlighting their utility as valuable diagnostic aids in clinical settings.
arXiv Detail & Related papers (2025-11-13T07:00:54Z) - Iterative Misclassification Error Training (IMET): An Optimized Neural Network Training Technique for Image Classification [0.5115559623386964]
We introduce Iterative Misclassification Error Training (IMET), a novel framework inspired by curriculum learning and coreset selection.
IMET aims to identify misclassified samples in order to streamline the training process, while prioritizing the model's attention on edge-case scenarios and rare outcomes.
The paper evaluates IMET's performance on benchmark medical image classification datasets against state-of-the-art ResNet architectures.
arXiv Detail & Related papers (2025-07-01T04:14:16Z) - AUTOCT: Automating Interpretable Clinical Trial Prediction with LLM Agents [47.640779069547534]
AutoCT is a novel framework that combines the reasoning capabilities of large language models with the explainability of classical machine learning.
We show that AutoCT performs on par with or better than SOTA methods on clinical trial prediction tasks within only a limited number of self-refinement iterations.
arXiv Detail & Related papers (2025-06-04T11:50:55Z) - Adaptive Deep Learning for Multiclass Breast Cancer Classification via Misprediction Risk Analysis [0.8028869343053783]
Early detection is crucial for improving patient outcomes.
Computer-aided diagnostic approaches have significantly enhanced breast cancer detection.
However, these methods face challenges in multiclass classification, leading to frequent mispredictions.
arXiv Detail & Related papers (2025-03-17T03:25:28Z) - The Skin Game: Revolutionizing Standards for AI Dermatology Model Comparison [0.6144680854063939]
Deep Learning approaches in dermatological image classification have shown promising results, yet the field faces significant methodological challenges that impede proper evaluation.
This paper presents a systematic analysis of current methodological practices in skin disease classification research, revealing substantial inconsistencies in data preparation, augmentation strategies, and performance reporting.
We propose comprehensive methodological recommendations for model development, evaluation, and clinical deployment, emphasizing rigorous data preparation, systematic error analysis, and specialized protocols for different image types.
arXiv Detail & Related papers (2025-02-04T17:15:36Z) - Pitfalls of topology-aware image segmentation [81.19923502845441]
We identify critical pitfalls in model evaluation that include inadequate connectivity choices, overlooked topological artifacts, and inappropriate use of evaluation metrics.
We propose a set of actionable recommendations to establish fair and robust evaluation standards for topology-aware medical image segmentation methods.
arXiv Detail & Related papers (2024-12-19T08:11:42Z) - Transformer-Based Self-Supervised Learning for Histopathological Classification of Ischemic Stroke Clot Origin [0.0]
Identifying the thromboembolism source in ischemic stroke is crucial for treatment and secondary prevention.
This study describes a self-supervised deep learning approach in digital pathology of emboli for classifying ischemic stroke clot origin.
arXiv Detail & Related papers (2024-05-01T23:40:12Z) - Model X-ray:Detecting Backdoored Models via Decision Boundary [62.675297418960355]
Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs).
We propose Model X-ray, a novel backdoor detection approach based on the analysis of illustrated two-dimensional (2D) decision boundaries.
Our approach includes two strategies focused on the decision areas dominated by clean samples and the concentration of label distribution.
arXiv Detail & Related papers (2024-02-27T12:42:07Z) - Multitask Deep Learning for Accurate Risk Stratification and Prediction
of Next Steps for Coronary CT Angiography Patients [26.50934421749854]
We propose a multi-task deep learning model to support risk stratification and down-stream test selection.
Our model achieved an Area Under the receiver operating characteristic Curve (AUC) of 0.76 in CAD risk stratification, and 0.72 AUC in predicting downstream tests.
arXiv Detail & Related papers (2023-09-01T08:34:13Z) - TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic
Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z) - On Feature Learning in the Presence of Spurious Correlations [45.86963293019703]
We show that the quality of learned feature representations is greatly affected by design decisions beyond the training method itself.
We significantly improve upon the best results reported in the literature on the popular Waterbirds, CelebA hair color prediction, and WILDS-FMOW problems.
arXiv Detail & Related papers (2022-10-20T16:10:28Z) - Trustworthy Visual Analytics in Clinical Gait Analysis: A Case Study for
Patients with Cerebral Palsy [43.55994393060723]
gaitXplorer is a visual analytics approach for the classification of CP-related gait patterns.
It integrates Grad-CAM, a well-established explainable artificial intelligence algorithm, for explanations of machine learning classifications.
arXiv Detail & Related papers (2022-08-10T09:21:28Z) - Benchmarking Heterogeneous Treatment Effect Models through the Lens of
Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Quality control for more reliable integration of deep learning-based
image segmentation into medical workflows [0.23609258021376836]
We present an analysis of state-of-the-art automatic quality control (QC) approaches to estimate the certainty of their outputs.
We validated the most promising approaches on a brain image segmentation task identifying white matter hyperintensities (WMH) in magnetic resonance imaging data.
arXiv Detail & Related papers (2021-12-06T16:30:43Z) - Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based
Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and area under the curve (AUC) of the receiver operator characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z) - Towards Evaluating the Robustness of Deep Diagnostic Models by
Adversarial Attack [38.480886577088384]
Recent studies have shown deep diagnostic models may not be robust in the inference process.
Adversarial example is a well-designed perturbation that is not easily perceived by humans.
We have designed two new defense methods to handle adversarial examples in deep diagnostic models.
arXiv Detail & Related papers (2021-03-05T02:24:47Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.