Adversarial Attacks and Defences for Skin Cancer Classification
- URL: http://arxiv.org/abs/2212.06822v1
- Date: Tue, 13 Dec 2022 18:58:21 GMT
- Title: Adversarial Attacks and Defences for Skin Cancer Classification
- Authors: Vinay Jogani, Joy Purohit, Ishaan Shivhare, Samina Attari and Shraddha
Surtkar
- Abstract summary: The usage of machine learning systems in the healthcare industry is increasing rapidly.
It becomes increasingly important to understand the vulnerabilities in such systems.
This paper explores common adversarial attack techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent years have seen concurrent, significant improvements in both
the medical images used to facilitate diagnosis and the performance of machine
learning techniques for tasks such as classification, detection, and
segmentation. As a result, a rapid increase in the usage of such systems can be
observed in the healthcare industry, for instance in the form of medical image
classification systems, where these models have achieved diagnostic parity with
human physicians. One such application where this can be observed is in
computer vision tasks such as the classification of skin lesions in
dermatoscopic images. However, as stakeholders in the healthcare industry, such
as insurance companies, continue to invest extensively in machine learning
infrastructure, it becomes increasingly important to understand the
vulnerabilities in such systems. Due to the highly critical nature of the tasks
being carried out by these machine learning models, it is necessary to analyze
techniques that could be used to take advantage of these vulnerabilities and
methods to defend against them. This paper explores common adversarial attack
techniques. The Fast Gradient Sign Method (FGSM) and Projected Gradient Descent
(PGD) are used against a Convolutional Neural Network trained to classify
dermatoscopic images of skin lesions. The paper then discusses one of the most
popular adversarial defense techniques, adversarial training. The model trained
on adversarial examples is then evaluated against the aforementioned attacks,
and recommendations for improving the robustness of neural networks are
provided based on the results of the experiment.
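
As context for the attack and defense techniques named above, the sketch below gives a minimal PyTorch implementation of FGSM, PGD, and a single adversarial-training step. This is an illustrative reconstruction rather than code from the paper: the function names, the perturbation budget eps, the PGD step size and iteration count, and the assumption that pixel values lie in [0, 1] are all hypothetical choices made for the example.

```python
# Minimal, illustrative sketch (not the paper's code): FGSM and PGD attacks
# and one adversarial-training step for an image classifier, in PyTorch.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=8 / 255):
    """FGSM: one step of size eps in the direction of the loss gradient's sign."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    adv = images + eps * grad.sign()   # perturb to increase the loss
    return adv.clamp(0, 1).detach()    # assumes inputs are scaled to [0, 1]

def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD: iterated FGSM steps, projected back onto the eps-ball around the input."""
    orig = images.clone().detach()
    adv = (orig + torch.empty_like(orig).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv = adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv + alpha * grad.sign()
        adv = orig + (adv - orig).clamp(-eps, eps)  # project onto the eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()

def adversarial_training_step(model, optimizer, images, labels):
    """Adversarial training: fit the model on attacked examples instead of clean ones."""
    adv = pgd_attack(model, images, labels)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Running `adversarial_training_step` over each minibatch implements the adversarial-training defense discussed in the abstract; the usual trade-off is that robustness to FGSM/PGD-style perturbations comes at the cost of some clean-data accuracy.
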
Related papers
- Securing the Diagnosis of Medical Imaging: An In-depth Analysis of AI-Resistant Attacks [0.0]
It is well known that attackers can cause misclassification by deliberately crafting inputs to machine learning classifiers.
Recent work has argued that adversarial attacks could be mounted against medical image analysis technologies.
It is therefore essential to assess how robust medical DNNs are against adversarial attacks.
arXiv Detail & Related papers (2024-08-01T07:37:27Z) - Adversarial-Robust Transfer Learning for Medical Imaging via Domain
Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z) - Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges [64.63744409431001]
We present a comprehensive survey on advances in adversarial attacks and defenses for medical image analysis.
For a fair comparison, we establish a new benchmark for adversarially robust medical diagnosis models.
arXiv Detail & Related papers (2023-03-24T16:38:58Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Analysis of Explainable Artificial Intelligence Methods on Medical Image
Classification [0.0]
The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems.
Medical image classification systems are being adopted due to their high accuracy and near parity with human physicians in many tasks.
The techniques used to gain insight into such black-box models fall within the field of explainable artificial intelligence (XAI).
arXiv Detail & Related papers (2022-12-10T06:17:43Z) - Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z) - Incremental Cross-Domain Adaptation for Robust Retinopathy Screening via
Bayesian Deep Learning [7.535751594024775]
Retinopathy represents a group of retinal diseases that, if not treated in a timely manner, can cause severe visual impairments or even blindness.
This paper presents a novel incremental cross-domain adaptation instrument that allows any deep classification model to progressively learn abnormal retinal pathologies.
The proposed framework, evaluated on six public datasets, outperforms the state-of-the-art competitors by achieving an overall accuracy and F1 score of 0.9826 and 0.9846, respectively.
arXiv Detail & Related papers (2021-10-18T13:45:21Z) - On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z) - Recent advances and clinical applications of deep learning in medical
image analysis [7.132678647070632]
We reviewed and summarized more than 200 recently published papers to provide a comprehensive overview of applying deep learning methods in various medical image analysis tasks.
In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning for medical images.
arXiv Detail & Related papers (2021-05-27T18:05:12Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z)