Now You See It, Now You Don't: Adversarial Vulnerabilities in
Computational Pathology
- URL: http://arxiv.org/abs/2106.08153v2
- Date: Wed, 16 Jun 2021 20:34:31 GMT
- Title: Now You See It, Now You Don't: Adversarial Vulnerabilities in
Computational Pathology
- Authors: Alex Foote, Amina Asif, Ayesha Azam, Tim Marshall-Cox, Nasir Rajpoot
and Fayyaz Minhas
- Score: 2.1577322127603407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models are routinely employed in computational pathology
(CPath) for solving problems of diagnostic and prognostic significance.
Typically, the generalization performance of CPath models is analyzed using
evaluation protocols such as cross-validation and testing on multi-centric
cohorts. However, to ensure that such CPath solutions are robust and safe for
use in a clinical setting, a critical analysis of their predictive performance
and vulnerability to adversarial attacks is required, which is the focus of
this paper. Specifically, we show that a highly accurate model for
classification of tumour patches in pathology images (AUC > 0.95) can easily be
attacked with minimal perturbations which are imperceptible to lay humans and
trained pathologists alike. Our analytical results show that it is possible to
generate single-instance white-box attacks on specific input images with high
success rate and low perturbation energy. Furthermore, we have also generated a
single universal perturbation matrix using the training dataset only which,
when added to unseen test images, results in forcing the trained neural network
to flip its prediction labels with high confidence at a success rate of > 84%.
We systematically analyze the relationship between perturbation energy of an
adversarial attack, its impact on morphological constructs of clinical
significance, their perceptibility by a trained pathologist and saliency maps
obtained using deep learning models. Based on our analysis, we strongly
recommend that computational pathology models be critically analyzed using the
proposed adversarial validation strategy prior to clinical adoption.
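The single-instance white-box attack described above can be sketched in miniature. The paper attacks a deep CNN patch classifier; the victim below is a toy logistic-regression "patch classifier" with an analytic input gradient, so the model, `fgsm_attack`, and the epsilon grid are all illustrative assumptions, not the authors' method:

```python
import numpy as np

# Toy white-box attack in the FGSM style: perturb an input along the sign of
# the loss gradient until the model's predicted label flips. The victim is a
# hypothetical linear "tumour patch" classifier, not the paper's CNN.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # fixed "trained" weights for an 8x8 patch
b = 0.1

def predict_prob(x):
    """P(tumour | x) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_attack(x, eps):
    """One sign-gradient step that increases the loss of the current label."""
    p = predict_prob(x)
    y = 1.0 if p >= 0.5 else 0.0       # label the model currently assigns
    grad = (p - y) * w                 # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad)

def minimal_flip(x, eps_grid=(0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0)):
    """Smallest eps on the grid whose perturbation flips the prediction."""
    y0 = predict_prob(x) >= 0.5
    for eps in eps_grid:
        x_adv = fgsm_attack(x, eps)
        if (predict_prob(x_adv) >= 0.5) != y0:
            return eps, x_adv
    return None, x

patch = rng.normal(size=64)            # stand-in for a normalized image patch
eps, adv = minimal_flip(patch)
```

The max-norm of the perturbation is bounded by the returned eps, which is the sense in which such attacks are "minimal"; on this toy model a flip typically appears at a small eps, mirroring the paper's low-perturbation-energy finding.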
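The universal-perturbation result can likewise be sketched. The abstract does not give the authors' construction, so this stand-in scales the sign of the mean training-set gradient, computes it from training samples only, and measures the label-flip rate on held-out samples; the toy linear victim and every name below are assumptions:

```python
import numpy as np

# Sketch of a universal adversarial perturbation: one fixed delta, computed
# from training data only, that flips the model's prediction on unseen inputs.
# Victim: a hypothetical linear "tumour patch" classifier (not the paper's CNN).
rng = np.random.default_rng(1)
w = rng.normal(size=64)
b = 0.1

def predict_prob(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(tumour | x)

def make_universal_delta(train, eps):
    """Single perturbation from the mean training gradient (training set only)."""
    probs = predict_prob(train)                  # per-sample P(tumour)
    labels = (probs >= 0.5).astype(float)        # labels the model assigns
    grads = (probs - labels)[:, None] * w        # per-sample d(loss)/dx
    return eps * np.sign(grads.mean(axis=0))

def flip_rate(test, delta):
    """Fraction of unseen inputs whose predicted label flips under delta."""
    before = predict_prob(test) >= 0.5
    after = predict_prob(test + delta) >= 0.5
    return float((before != after).mean())

# Draw disjoint "train" and "test" pools of tumour-predicted patches so the
# mean gradient points in one consistent direction for the linear victim.
pool = rng.normal(size=(4000, 64))
pool = pool[predict_prob(pool) >= 0.5]
train, test = pool[:200], pool[200:400]

delta = make_universal_delta(train, eps=0.5)
rate = flip_rate(test, delta)
```

Restricting to one predicted class is a concession to the linear victim, which a single delta can only push in one direction; the paper's > 84% success rate on a CNN is the analogous measurement without that restriction.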
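The quantities the analysis relates, perturbation energy and model saliency, can be made concrete. The abstract defines neither, so this sketch assumes two common defaults: energy as the mean squared perturbation and saliency as the absolute input gradient.

```python
import numpy as np

# Illustrative definitions (common defaults, not necessarily the paper's):
# perturbation energy = mean squared perturbation per pixel;
# saliency map = |d score / d input| arranged on the patch grid.
def perturbation_energy(delta):
    return float(np.mean(delta ** 2))

def saliency_map(w, patch_shape=(8, 8)):
    # For a linear score z = x @ w + b the input gradient is w itself,
    # so saliency is just |w| reshaped to the patch grid.
    return np.abs(w).reshape(patch_shape)

rng = np.random.default_rng(2)
w = rng.normal(size=64)                # toy linear model weights
delta = 0.1 * np.sign(w)               # an FGSM-style perturbation, eps = 0.1
energy = perturbation_energy(delta)    # 0.1**2 = 0.01 for a pure sign step
sal = saliency_map(w)
```

Under these definitions the paper's analysis amounts to sweeping eps, recording energy, and comparing where the attack concentrates against where the saliency map and the pathologist look.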
Related papers
- Transformer-Based Self-Supervised Learning for Histopathological Classification of Ischemic Stroke Clot Origin [0.0]
Identifying the thromboembolism source in ischemic stroke is crucial for treatment and secondary prevention.
This study describes a self-supervised deep learning approach in digital pathology of emboli for classifying ischemic stroke clot origin.
arXiv Detail & Related papers (2024-05-01T23:40:12Z)
- Model X-ray: Detecting Backdoored Models via Decision Boundary [62.675297418960355]
Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs).
We propose Model X-ray, a novel backdoor detection approach based on the analysis of illustrated two-dimensional (2D) decision boundaries.
Our approach includes two strategies focused on the decision areas dominated by clean samples and the concentration of label distribution.
arXiv Detail & Related papers (2024-02-27T12:42:07Z)
- Multitask Deep Learning for Accurate Risk Stratification and Prediction of Next Steps for Coronary CT Angiography Patients [26.50934421749854]
We propose a multi-task deep learning model to support risk stratification and down-stream test selection.
Our model achieved an Area Under the receiver operating characteristic Curve (AUC) of 0.76 in CAD risk stratification, and 0.72 AUC in predicting downstream tests.
arXiv Detail & Related papers (2023-09-01T08:34:13Z)
- TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z)
- Trustworthy Visual Analytics in Clinical Gait Analysis: A Case Study for Patients with Cerebral Palsy [43.55994393060723]
gaitXplorer is a visual analytics approach for the classification of CP-related gait patterns.
It integrates Grad-CAM, a well-established explainable artificial intelligence algorithm, for explanations of machine learning classifications.
arXiv Detail & Related papers (2022-08-10T09:21:28Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Quality control for more reliable integration of deep learning-based image segmentation into medical workflows [0.23609258021376836]
We present an analysis of state-of-the-art automatic quality control (QC) approaches to estimate the certainty of their outputs.
We validated the most promising approaches on a brain image segmentation task identifying white matter hyperintensities (WMH) in magnetic resonance imaging data.
arXiv Detail & Related papers (2021-12-06T16:30:43Z)
- Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and the area under the curve (AUC) of the receiver operating characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z)
- Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack [38.480886577088384]
Recent studies have shown deep diagnostic models may not be robust in the inference process.
Adversarial example is a well-designed perturbation that is not easily perceived by humans.
We have designed two new defense methods to handle adversarial examples in deep diagnostic models.
arXiv Detail & Related papers (2021-03-05T02:24:47Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.