MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust
Classification of Breast Cancer
- URL: http://arxiv.org/abs/2205.01674v1
- Date: Mon, 2 May 2022 20:25:26 GMT
- Title: MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust
Classification of Breast Cancer
- Authors: Shoukun Sun, Min Xian, Aleksandar Vakanski, Hossny Ghanem
- Abstract summary: We propose the Multi-instance RST with a drop-max layer, namely MIRST-DM, to learn smoother decision boundaries on small datasets.
The proposed approach was validated using a small breast ultrasound dataset with 1,190 images.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust self-training (RST) can augment the adversarial robustness of image
classification models without significantly sacrificing models'
generalizability. However, RST and other state-of-the-art defense approaches
fail to preserve generalizability and to reproduce their strong adversarial
robustness on small medical image sets. In this work, we propose the
Multi-instance RST with a drop-max layer, namely MIRST-DM, which involves a
sequence of iteratively generated adversarial instances during training to
learn smoother decision boundaries on small datasets. The proposed drop-max
layer eliminates unstable features and helps learn representations that are
robust to image perturbations. The proposed approach was validated using a
small breast ultrasound dataset with 1,190 images. The results demonstrate that
the proposed approach achieves state-of-the-art adversarial robustness against
three prevalent attacks.
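The abstract does not spell out the drop-max formulation. One plausible reading, sketched below under stated assumptions, is a layer that zeroes out the largest activations in each feature vector, on the intuition that the most extreme responses are the least stable under input perturbations. The function name, the per-row treatment, and the `k` parameter are illustrative, not the paper's definition.

```python
import numpy as np

def drop_max(x, k=1):
    """Sketch of a drop-max layer: zero the k largest activations per row.

    x: array of shape (batch, features).
    Assumption: 'eliminating unstable features' means suppressing the
    most extreme activations; the paper's exact rule may differ.
    """
    out = x.copy()
    # Indices of the k largest activations in each row.
    idx = np.argpartition(x, -k, axis=1)[:, -k:]
    # Zero those positions, leaving the remaining activations intact.
    np.put_along_axis(out, idx, 0.0, axis=1)
    return out
```

During training, such a layer would force downstream weights not to rely on any single dominant activation, which is one way a smoother, perturbation-robust representation could emerge.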
Related papers
- Efficient One-Step Diffusion Refinement for Snapshot Compressive Imaging [8.819370643243012]
Coded Aperture Snapshot Spectral Imaging (CASSI) is a crucial technique for capturing three-dimensional multispectral images (MSIs).
Current state-of-the-art methods, predominantly end-to-end, face limitations in reconstructing high-frequency details.
This paper introduces a novel one-step Diffusion Probabilistic Model within a self-supervised adaptation framework for Snapshot Compressive Imaging.
arXiv Detail & Related papers (2024-09-11T17:02:10Z)
- Unleashing the Power of Generic Segmentation Models: A Simple Baseline for Infrared Small Target Detection [57.666055329221194]
We investigate the adaptation of generic segmentation models, such as the Segment Anything Model (SAM), to infrared small object detection tasks.
Our model demonstrates significantly improved performance in both accuracy and throughput compared to existing approaches.
arXiv Detail & Related papers (2024-09-07T05:31:24Z)
- Inter-slice Super-resolution of Magnetic Resonance Images by Pre-training and Self-supervised Fine-tuning [49.197385954021456]
In clinical practice, 2D magnetic resonance (MR) sequences are widely adopted. While individual 2D slices can be stacked to form a 3D volume, the relatively large slice spacing can pose challenges for visualization and subsequent analysis tasks.
To reduce slice spacing, deep-learning-based super-resolution techniques are widely investigated.
Most current solutions require a substantial number of paired high-resolution and low-resolution images for supervised training, which are typically unavailable in real-world scenarios.
arXiv Detail & Related papers (2024-06-10T02:20:26Z)
- Bi-level Guided Diffusion Models for Zero-Shot Medical Imaging Inverse Problems [4.82425721275731]
Inverse problems aim to infer high-quality images from incomplete, noisy measurements.
The Diffusion Models have recently emerged as a promising approach to such practical challenges.
A central challenge in this approach is how to guide an unconditional prediction to conform to the measurement information.
We propose Bi-level Guided Diffusion Models (BGDM).
arXiv Detail & Related papers (2024-04-04T10:36:56Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation [44.2692621807947]
We develop a framework to generate adversarial privacy preserving images that have minimum perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work exploring quality-preserving adversarial image generation based on the MND concept for privacy preservation.
arXiv Detail & Related papers (2022-06-17T09:02:12Z)
- Towards Unbiased COVID-19 Lesion Localisation and Segmentation via Weakly Supervised Learning [66.36706284671291]
We propose a data-driven framework supervised by only image-level labels to support unbiased lesion localisation.
The framework can explicitly separate potential lesions from original images, with the help of a generative adversarial network and a lesion-specific decoder.
arXiv Detail & Related papers (2021-03-01T06:05:49Z)
- Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification [63.44396343014749]
We propose a new margin-based surrogate loss function for the AUC score.
It is more robust than the commonly used square loss while enjoying the same advantage in terms of large-scale optimization.
To the best of our knowledge, this is the first work that makes DAM succeed on large-scale medical image datasets.
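The entry above names a margin-based surrogate for the AUC score but gives no formula. A minimal illustrative sketch, assuming a pairwise squared-hinge form (the paper's actual DAM loss is a min-max reformulation; the function and parameter names here are assumptions):

```python
import numpy as np

def margin_auc_surrogate(scores_pos, scores_neg, margin=1.0):
    """Pairwise margin-based AUC surrogate (illustrative sketch).

    Penalizes every positive-negative pair whose score gap falls short
    of the margin, with a squared hinge. AUC itself counts the fraction
    of correctly ordered pairs; this is a differentiable stand-in.
    """
    # All pairwise gaps s_pos - s_neg, shape (n_pos, n_neg).
    diff = scores_pos[:, None] - scores_neg[None, :]
    # Squared hinge: zero loss once the gap exceeds the margin.
    return np.mean(np.maximum(0.0, margin - diff) ** 2)
```

Unlike the square loss, the hinge stops penalizing pairs that are already separated by the margin, which is one source of the robustness the abstract refers to.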
arXiv Detail & Related papers (2020-12-06T03:41:51Z)
- Whole Slide Images based Cancer Survival Prediction using Attention Guided Deep Multiple Instance Learning Networks [38.39901070720532]
Current image-based survival models are limited to key patches or clusters derived from Whole Slide Images (WSIs).
We propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL) by introducing both siamese MI-FCN and attention-based MIL pooling.
We evaluated our methods on two large cancer whole slide images datasets and our results suggest that the proposed approach is more effective and suitable for large datasets.
arXiv Detail & Related papers (2020-09-23T14:31:15Z)
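The attention-based MIL pooling mentioned in the last entry can be sketched as follows. This is a generic attention-MIL pooling in the style popularized for multiple-instance learning, not DeepAttnMISL's exact architecture; the parameter names `v` and `w` are illustrative.

```python
import numpy as np

def attention_mil_pool(instance_feats, w, v):
    """Attention-based MIL pooling (illustrative sketch).

    instance_feats: (n_instances, d) features, e.g. one row per WSI patch.
    v: (d, h) and w: (h,) are learnable attention parameters.
    Returns a (d,) bag embedding: an attention-weighted sum of instances.
    """
    # Unnormalized attention score per instance.
    a = np.tanh(instance_feats @ v) @ w
    # Softmax over instances (max-shifted for numerical stability).
    a = np.exp(a - a.max())
    a = a / a.sum()
    # Weighted sum collapses the bag into a single representation.
    return a @ instance_feats
```

The attention weights let the model emphasize diagnostically relevant patches instead of treating every patch in a slide equally, which is the core idea behind attention-guided MIL for WSI-level prediction.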
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.