Weakly Supervised PET Tumor Detection Using Class Response
- URL: http://arxiv.org/abs/2003.08337v2
- Date: Thu, 19 Mar 2020 08:01:06 GMT
- Title: Weakly Supervised PET Tumor Detection Using Class Response
- Authors: Amine Amyar, Romain Modzelewski, Pierre Vera, Vincent Morard, and Su Ruan
- Abstract summary: We present a novel approach to locate different types of lesions in positron emission tomography (PET) images using only a class label at the image level.
The advantage of our proposed method is that it detects the whole tumor volume in 3D images using only two 2D projections of the PET image, with very promising results.
- Score: 3.947298454012977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the biggest challenges in medical imaging is the lack of data
and of annotated data. Classical segmentation methods such as U-Net are proven
useful but remain limited by the scarcity of annotations. Weakly supervised
learning is a promising way to address this problem; however, it is challenging
to train a single model to efficiently detect and locate different types of
lesions, given the huge variation in images. In this paper, we present a novel
approach to locate different types of lesions in positron emission tomography
(PET) images using only a class label at the image level. First, a simple
convolutional neural network classifier is trained to predict the type of
cancer from two 2D MIP images. Then, a pseudo-localization of the tumor is
generated using class activation maps, back-propagated and corrected in a
multitask learning approach with prior knowledge, resulting in a tumor
detection mask. Finally, we use the masks generated from the two 2D images to
detect the tumor in the 3D image. The advantage of our proposed method is that
it detects the whole tumor volume in 3D images using only two 2D projections
of the PET image, with very promising results. It can be used as a tool to
locate tumors in a PET scan very efficiently, a task that is time-consuming
for physicians. In addition, we show that our proposed method can be used to
conduct a radiomics study with state-of-the-art results.
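The localization pipeline the abstract describes (a class activation map from a 2D MIP classifier, thresholded into a mask, with two orthogonal 2D masks combined into a 3D detection) can be sketched roughly as follows. All shapes, thresholds, and variable names here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the classifier's internals on one MIP view:
# F holds the final conv feature maps (C, H, W); w holds the
# fully-connected weights of the predicted class. Both are
# random placeholders, not trained values.
F = rng.random((8, 64, 64))
w = rng.random(8)

# Class activation map (Zhou et al., CVPR 2016): class-weighted
# sum of the feature maps, normalized to [0, 1].
cam = np.tensordot(w, F, axes=1)             # shape (H, W)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Pseudo-localization: threshold the CAM into a binary 2D mask.
coronal_mask = cam > 0.5                     # (Z, X) from a coronal MIP
sagittal_mask = rng.random((64, 64)) > 0.5   # (Z, Y), stand-in for view 2

# Back-project the two orthogonal 2D masks into a 3D detection
# volume (Z, Y, X): a voxel is kept only if both views flag it.
mask3d = coronal_mask[:, None, :] & sagittal_mask[:, :, None]
print(mask3d.shape)                          # (64, 64, 64)
```

The intersection step is why two views suffice: each 2D mask constrains two of the three axes, so their conjunction bounds the tumor in all three.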
Related papers
- Unsupervised Tumor-Aware Distillation for Multi-Modal Brain Image Translation [8.380597715285237]
Unsupervised multi-modal brain image translation has been extensively studied.
Existing methods suffer from the problem of brain tumor deformation during translation.
We propose an unsupervised tumor-aware distillation teacher-student network called UTAD-Net.
arXiv Detail & Related papers (2024-03-29T13:35:37Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view approach for vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- A Novel Framework for Brain Tumor Detection Based on Convolutional Variational Generative Models [6.726255259929498]
This paper introduces a novel framework for brain tumor detection and classification.
The proposed framework acquires an overall detection accuracy of 96.88%.
It highlights the promise of the proposed framework as an accurate low-overhead brain tumor detection system.
arXiv Detail & Related papers (2022-02-20T16:14:01Z)
- Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
arXiv Detail & Related papers (2021-08-08T11:26:34Z)
- Deep Learning models for benign and malign Ocular Tumor Growth Estimation [3.1558405181807574]
Clinicians often face issues in selecting suitable image processing algorithm for medical imaging data.
A strategy for the selection of a proper model is presented here.
arXiv Detail & Related papers (2021-07-09T05:40:25Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image and, in turn, a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods and another classification model using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- RADIOGAN: Deep Convolutional Conditional Generative adversarial Network To Generate PET Images [3.947298454012977]
We propose a deep convolutional conditional generative adversarial network to generate MIP positron emission tomography (PET) images.
The advantage of our proposed method is a single model capable of generating different classes of lesions, trained on a small sample size for each lesion class.
In addition, we show that a walk through a latent space can be used as a tool to evaluate the images generated.
arXiv Detail & Related papers (2020-03-19T10:14:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.