ACAT: Adversarial Counterfactual Attention for Classification and
Detection in Medical Imaging
- URL: http://arxiv.org/abs/2303.15421v2
- Date: Fri, 11 Aug 2023 20:25:43 GMT
- Title: ACAT: Adversarial Counterfactual Attention for Classification and
Detection in Medical Imaging
- Authors: Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw,
Grant Mair, Emanuele Trucco, Amos Storkey
- Abstract summary: We propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales.
ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84%.
- Score: 41.202147558260336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In some medical imaging tasks and other settings where only small parts of
the image are informative for the classification task, traditional CNNs can
sometimes struggle to generalise. Manually annotated Regions of Interest (ROI)
are sometimes used to isolate the most informative parts of the image. However,
these are expensive to collect and may vary significantly across annotators. To
overcome these issues, we propose a framework that employs saliency maps to
obtain soft spatial attention masks that modulate the image features at
different scales. We refer to our method as Adversarial Counterfactual
Attention (ACAT). ACAT increases the baseline classification accuracy of
lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related
findings in lung CT scans from 67.71% to 70.84% and exceeds the performance of
competing methods. We investigate the best way to generate the saliency maps
employed in our architecture and propose a way to obtain them from
adversarially generated counterfactual images. These saliency maps are able to
isolate the area of interest in brain and lung CT scans without using any manual
annotations. In the task of localising the lesion location out of 6 possible
regions, they obtain a score of 65.05% on brain CT scans, improving on the
61.29% obtained with the best competing method.
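As an illustration only (not the authors' implementation), the two core ideas described above, a saliency map derived from the difference between an image and an adversarially generated counterfactual, and a soft spatial attention mask that modulates features rather than hard-masking them, could be sketched as follows. All function names and the toy data are hypothetical:

```python
import numpy as np

def saliency_from_counterfactual(image, counterfactual):
    """Toy saliency map: normalised absolute difference between an image
    and its (adversarially generated) counterfactual."""
    diff = np.abs(image - counterfactual)
    return diff / (diff.max() + 1e-8)

def soft_attention_modulate(features, saliency, alpha=1.0):
    """Modulate feature maps with a soft spatial attention mask derived
    from a saliency map. The mask is rescaled to the feature resolution
    by simple striding (a stand-in for proper interpolation)."""
    h, w = features.shape[-2:]
    sh, sw = saliency.shape[0] // h, saliency.shape[1] // w
    mask = saliency[::sh, ::sw][:h, :w]      # crude downsample to feature size
    return features * (1.0 + alpha * mask)   # soft gating: never zeros features

# Toy example: an 8x8 "scan" with a bright 2x2 lesion that the
# "healthy" counterfactual removes.
img = np.zeros((8, 8)); img[2:4, 2:4] = 1.0
cf = np.zeros((8, 8))                        # counterfactual without the lesion
sal = saliency_from_counterfactual(img, cf)  # highlights only the lesion area
feat = np.ones((1, 4, 4))                    # mock feature map at half resolution
out = soft_attention_modulate(feat, sal)     # lesion-area features are amplified
```

The soft mask keeps all features active while upweighting the salient region, which is the behaviour the abstract contrasts with hard ROI masking.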
Related papers
- Integrated Image and Location Analysis for Wound Classification: A Deep Learning Approach [3.5427949413406563]
The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods.
We introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers.
A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging.
arXiv Detail & Related papers (2023-08-23T02:49:22Z)
- Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images [59.85702949046042]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine Denoising Diffusion Probabilistic Model (DDPM) and Denoising Implicit Model (DDIM) at each step of the sampling process.
We verify that when our method is applied to healthy samples, the input images are reconstructed without significant modifications.
arXiv Detail & Related papers (2023-08-03T21:56:50Z)
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- Generative Adversarial Networks for Weakly Supervised Generation and Evaluation of Brain Tumor Segmentations on MR Images [0.0]
This work presents a weakly supervised approach to segment anomalies in 2D magnetic resonance images.
We train a generative adversarial network (GAN) that converts cancerous images to healthy variants.
Non-cancerous variants can also be used to evaluate the segmentations in a weakly supervised fashion.
arXiv Detail & Related papers (2022-11-10T00:04:46Z)
- Classification of COVID-19 Patients with their Severity Level from Chest CT Scans using Transfer Learning [3.667495151642095]
The rapid increase in COVID-19 cases has led to greater demand for hospital beds and other medical equipment.
Keeping this in mind, we share our research in detecting COVID-19 as well as assessing its severity using chest-CT scans and Deep Learning pre-trained models.
Our model can therefore help radiologists detect COVID-19 and the extent of its severity.
arXiv Detail & Related papers (2022-05-27T06:22:09Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Investigating and Exploiting Image Resolution for Transfer Learning-based Skin Lesion Classification [3.110738188734789]
Fine-tuning pre-trained convolutional neural networks (CNNs) has been shown to work well for skin lesion classification.
In this paper, we explore the effect of input image size on skin lesion classification performance of fine-tuned CNNs.
Our results show that very small images (64x64 pixels) degrade classification performance, while images of 128x128 pixels already support good performance, with larger image sizes yielding only slight further improvement.
arXiv Detail & Related papers (2020-06-25T21:51:24Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
- Spinal Metastases Segmentation in MR Imaging using Deep Convolutional Neural Networks [0.0]
This study's objective was to segment spinal metastases in diagnostic MR images using a deep learning-based approach.
We used a U-Net-like architecture trained with 40 clinical cases including both lytic and sclerotic lesion types and various MR sequences.
Compared to expertly annotated lesion segmentations, the experiments yielded promising results with average Dice scores up to 77.6% and mean sensitivity rates up to 78.9%.
arXiv Detail & Related papers (2020-01-08T10:59:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.