SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial
Training for robust Chest X-rays Classification
- URL: http://arxiv.org/abs/2308.07874v1
- Date: Tue, 15 Aug 2023 16:40:46 GMT
- Title: SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial
Training for robust Chest X-rays Classification
- Authors: Raza Imam, Ibrahim Almakky, Salma Alrashdi, Baketah Alrashdi, Mohammad
Yaqub
- Abstract summary: The vulnerability of Vision Transformers (ViTs) to adversarial,
privacy, and confidentiality attacks raises serious concerns about their reliability in medical settings.
We propose Self-Ensembling ViT with defensive Distillation and Adversarial training (SEDA).
SEDA utilizes efficient CNN blocks to learn spatial features at various levels of abstraction from feature representations extracted from intermediate ViT blocks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning methods have recently seen increased adoption in medical
imaging applications. However, vulnerabilities uncovered in recent Deep Learning
solutions could hinder their future adoption. In particular, the vulnerability of
Vision Transformers (ViTs) to adversarial, privacy, and confidentiality attacks
raises serious concerns about their reliability in medical settings. This work
aims to enhance the robustness of self-ensembling ViTs for the tuberculosis
chest X-ray classification task. We propose
Self-Ensembling ViT with defensive Distillation and Adversarial training
(SEDA). SEDA utilizes efficient CNN blocks to learn spatial features at various
levels of abstraction from feature representations extracted from intermediate
ViT blocks, which are largely unaffected by adversarial perturbations.
Furthermore, SEDA leverages adversarial training in combination with defensive
distillation for improved robustness against adversaries.
Training on adversarial examples improves the model's generalizability and its
ability to handle perturbations. Distillation using soft probabilities
introduces uncertainty and variation into the output probabilities, making
adversarial and privacy attacks more difficult. Extensive experiments with the
proposed architecture and training paradigm on a publicly available
tuberculosis X-ray dataset show the SOTA efficacy of SEDA compared to SEViT:
the framework is 70x lighter and is +9% more robust.
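To make the self-ensembling idea concrete, below is a minimal PyTorch sketch of a SEDA-style classifier: lightweight CNN heads classify the token maps of intermediate ViT blocks, and the ensemble averages their soft predictions. All module names, layer sizes, the block depth, and the grayscale 224x224 input are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of SEDA-style self-ensembling (assumed architecture,
# not the authors' code): small CNN heads classify the token maps of
# intermediate ViT blocks, and their soft predictions are averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNHead(nn.Module):
    """Small CNN classifier over one intermediate ViT token map."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.conv = nn.Conv2d(dim, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape
        s = int(n ** 0.5)                          # assumes a square patch grid
        fmap = tokens.transpose(1, 2).reshape(b, d, s, s)
        h = F.relu(self.conv(fmap))
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)
        return self.fc(h)

class SelfEnsemblingViT(nn.Module):
    """Tiny ViT backbone whose intermediate blocks feed per-block CNN heads."""
    def __init__(self, num_classes=2, dim=192, depth=4, num_heads=3):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, num_heads,
                                       dim_feedforward=4 * dim, batch_first=True)
            for _ in range(depth))
        self.heads = nn.ModuleList(CNNHead(dim, num_classes) for _ in range(depth))
        self.final_head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, D)
        probs = []
        for blk, head in zip(self.blocks, self.heads):
            tokens = blk(tokens)
            probs.append(F.softmax(head(tokens), dim=-1))
        probs.append(F.softmax(self.final_head(tokens.mean(dim=1)), dim=-1))
        return torch.stack(probs).mean(dim=0)      # average the ensemble

model = SelfEnsemblingViT()
probs = model(torch.randn(2, 1, 224, 224))         # two grayscale chest X-rays
print(probs.shape)                                 # torch.Size([2, 2])
```

Averaging the soft predictions is one simple ensembling choice; majority voting over the per-head predictions would be an equally plausible variant.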
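The training paradigm, adversarial training combined with defensive distillation, can be sketched in the same spirit. The FGSM attacker, the epsilon, the temperature T, and the loss weighting below are illustrative placeholders rather than the paper's exact recipe; both models are assumed to return raw logits, and inputs are assumed to be normalized to [0, 1].

```python
# Illustrative training step combining FGSM adversarial examples with
# defensive distillation (temperature-softened teacher probabilities).
# Hyperparameters are placeholder values, not taken from the paper.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=2 / 255):
    """Generate FGSM adversarial examples for inputs assumed to lie in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach().clamp(0, 1)

def distill_step(student, teacher, optimizer, x, y, T=4.0, alpha=0.5):
    """One step: hard-label loss on adversarial inputs plus a KL term
    against the teacher's temperature-softened outputs."""
    x_adv = fgsm(student, x, y)
    optimizer.zero_grad()
    s_logits = student(x_adv)
    with torch.no_grad():                  # teacher is frozen during distillation
        t_logits = teacher(x_adv)
    hard = F.cross_entropy(s_logits, y)
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    loss = alpha * hard + (1 - alpha) * soft
    loss.backward()
    optimizer.step()
    return loss.item()
```

A natural pairing, consistent with the 70x efficiency claim though still an assumption here, is to use the self-ensembled model as the teacher and a compact classifier as the student.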
Related papers
- Robust and Explainable Framework to Address Data Scarcity in Diagnostic Imaging [6.744847405966574]
We introduce a novel ensemble framework called the 'Efficient Transfer and Self-supervised Learning based Ensemble Framework' (ETSEF).
ETSEF leverages features from multiple pre-trained deep learning models to efficiently learn powerful representations from a limited number of data samples.
Five independent medical imaging tasks, including endoscopy, breast cancer, monkeypox, brain tumour, and glaucoma detection, were tested to demonstrate ETSEF's effectiveness and robustness.
arXiv Detail & Related papers (2024-07-09T05:48:45Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness, to protect against potential adversarial attacks, and reliable uncertainty quantification.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- On enhancing the robustness of Vision Transformers: Defensive Diffusion [0.0]
ViTs, the SOTA vision models, rely on large amounts of patient data for training.
Adversaries may exploit vulnerabilities in ViTs to extract sensitive patient information, compromising patient privacy.
This work addresses these vulnerabilities to ensure the trustworthiness and reliability of ViTs in medical applications.
arXiv Detail & Related papers (2023-05-14T00:17:33Z)
- Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification [4.843654097048771]
Vision Transformers (ViT) are competing to replace Convolutional Neural Networks (CNN) for various computer vision tasks in medical imaging.
Recent works have shown that ViTs are also susceptible to adversarial attacks and suffer significant performance degradation under attack.
We propose a novel self-ensembling method to enhance the robustness of ViT in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-08-04T19:02:24Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce the new concept of an adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance the robustness of unlabeled data via adversarial training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z)
- Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
- Towards Robust Neural Networks via Orthogonal Diversity [30.77473391842894]
Adversarial training and its variants have proven to be among the most effective techniques for enhancing the robustness of Deep Neural Networks (DNNs).
This paper proposes a novel defense, DIO, that augments the model so it learns features that are adaptive to diverse inputs, including adversarial examples.
In this way, DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by its mutually orthogonal paths.
arXiv Detail & Related papers (2020-10-23T06:40:56Z)