ACAT: Adversarial Counterfactual Attention for Classification and
Detection in Medical Imaging
- URL: http://arxiv.org/abs/2303.15421v2
- Date: Fri, 11 Aug 2023 20:25:43 GMT
- Title: ACAT: Adversarial Counterfactual Attention for Classification and
Detection in Medical Imaging
- Authors: Alessandro Fontanella, Antreas Antoniou, Wenwen Li, Joanna Wardlaw,
Grant Mair, Emanuele Trucco, Amos Storkey
- Abstract summary: We propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales.
ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84%.
- Score: 41.202147558260336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In some medical imaging tasks and other settings where only small parts of
the image are informative for the classification task, traditional CNNs can
sometimes struggle to generalise. Manually annotated Regions of Interest (ROI)
are sometimes used to isolate the most informative parts of the image. However,
these are expensive to collect and may vary significantly across annotators. To
overcome these issues, we propose a framework that employs saliency maps to
obtain soft spatial attention masks that modulate the image features at
different scales. We refer to our method as Adversarial Counterfactual
Attention (ACAT). ACAT increases the baseline classification accuracy of
lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related
findings in lung CT scans from 67.71% to 70.84% and exceeds the performance of
competing methods. We investigate the best way to generate the saliency maps
employed in our architecture and propose a way to obtain them from
adversarially generated counterfactual images. They are able to isolate the
area of interest in brain and lung CT scans without using any manual
annotations. In the task of localising the lesion within one of 6 possible
regions, they achieve a score of 65.05% on brain CT scans, improving on the
61.29% obtained with the best competing method.
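The core mechanism the abstract describes, using a saliency map as a soft spatial attention mask that modulates image features, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name, the min-max normalisation of the saliency map, and the residual-style modulation formula are all assumptions.

```python
def soft_attention(features, saliency):
    """Modulate a 2-D feature map with a saliency-derived soft mask.

    features: H x W list of floats (one channel of a feature map)
    saliency: H x W list of floats (raw saliency scores)
    Returns the features scaled by a mask normalised to [0, 1].
    """
    # Min-max normalise the saliency scores into a soft mask in [0, 1].
    flat = [v for row in saliency for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat map
    mask = [[(v - lo) / span for v in row] for row in saliency]
    # Residual-style modulation: salient regions are amplified while
    # unattended regions keep their original features rather than vanishing.
    return [[f * (1.0 + m) for f, m in zip(frow, mrow)]
            for frow, mrow in zip(features, mask)]
```

In ACAT such masks would be applied at several feature scales inside the network; this sketch shows a single scale and channel for clarity.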
Related papers
- Topology and Intersection-Union Constrained Loss Function for Multi-Region Anatomical Segmentation in Ocular Images [5.628938375586146]
Ocular Myasthenia Gravis (OMG) is a rare and challenging disease to detect in its early stages.
No publicly available dataset or tools currently exist for this purpose.
We propose a new topology and intersection-union constrained loss function (TIU loss) that improves performance using small training datasets.
arXiv Detail & Related papers (2024-11-01T13:17:18Z)
- CTARR: A fast and robust method for identifying anatomical regions on CT images via atlas registration [0.09130220606101362]
We introduce CTARR, a novel generic method for CT Anatomical Region Recognition.
The method serves as a pre-processing step for any deep learning-based CT image analysis pipeline.
Our proposed method is based on atlas registration and provides a fast and robust way to crop any anatomical region encoded as one or multiple bounding box(es) from any unlabeled CT scan.
arXiv Detail & Related papers (2024-10-03T08:52:21Z)
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- Generative Adversarial Networks for Weakly Supervised Generation and Evaluation of Brain Tumor Segmentations on MR Images [0.0]
This work presents a weakly supervised approach to segment anomalies in 2D magnetic resonance images.
We train a generative adversarial network (GAN) that converts cancerous images to healthy variants.
Non-cancerous variants can also be used to evaluate the segmentations in a weakly supervised fashion.
arXiv Detail & Related papers (2022-11-10T00:04:46Z)
- FetReg2021: A Challenge on Placental Vessel Segmentation and Registration in Fetoscopy [52.3219875147181]
Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS)
The procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination.
Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking.
Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopy videos.
arXiv Detail & Related papers (2022-06-24T23:44:42Z)
- Classification of COVID-19 Patients with their Severity Level from Chest CT Scans using Transfer Learning [3.667495151642095]
The rapid rise in COVID-19 cases has increased the demand for hospital beds and other medical equipment.
Keeping this in mind, we share our research in detecting COVID-19 as well as assessing its severity using chest-CT scans and Deep Learning pre-trained models.
Our model can therefore help radiologists detect COVID-19 and the extent of its severity.
arXiv Detail & Related papers (2022-05-27T06:22:09Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Investigating and Exploiting Image Resolution for Transfer Learning-based Skin Lesion Classification [3.110738188734789]
Fine-tuning pre-trained convolutional neural networks (CNNs) has been shown to work well for skin lesion classification.
In this paper, we explore the effect of input image size on skin lesion classification performance of fine-tuned CNNs.
Our results show that very small images (64x64 pixels) degrade classification performance, while 128x128-pixel images support good performance, and larger image sizes yield slightly better classification.
arXiv Detail & Related papers (2020-06-25T21:51:24Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% of cases and acceptable annotation masks in 96.2%, compared to 27.0% and 34.9% respectively for control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.