Head and Neck Tumor Segmentation from [18F]F-FDG PET/CT Images Based on
3D Diffusion Model
- URL: http://arxiv.org/abs/2401.17593v1
- Date: Wed, 31 Jan 2024 04:34:31 GMT
- Title: Head and Neck Tumor Segmentation from [18F]F-FDG PET/CT Images Based on
3D Diffusion Model
- Authors: Yafei Dong and Kuang Gong
- Abstract summary: Head and neck (H&N) cancers are among the most prevalent types of cancer worldwide.
Recently, the diffusion model has demonstrated remarkable performance in various image-generation tasks.
- Score: 2.895809495677426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Head and neck (H&N) cancers are among the most prevalent types of cancer
worldwide, and [18F]F-FDG PET/CT is widely used for H&N cancer management.
Recently, the diffusion model has demonstrated remarkable performance in
various image-generation tasks. In this work, we propose a 3D diffusion model
to accurately perform H&N tumor segmentation from 3D PET and CT volumes. The 3D
diffusion model was developed to account for the 3D nature of the acquired PET
and CT images. During the reverse process, the model utilized a 3D UNet structure
and took the concatenation of PET, CT, and Gaussian noise volumes as the
network input to generate the tumor mask. Experiments based on the HECKTOR
challenge dataset were conducted to evaluate the effectiveness of the proposed
diffusion model. Several state-of-the-art techniques based on U-Net and
Transformer structures were adopted as the reference methods. Benefits of
employing both PET and CT as the network input as well as further extending the
diffusion model from 2D to 3D were investigated based on various quantitative
metrics and the uncertainty maps generated. Results showed that the proposed 3D
diffusion model could generate more accurate segmentation results compared with
other methods. Compared to the diffusion model in 2D format, the proposed 3D
model yielded superior results. Our experiments also highlighted the advantage
of utilizing dual-modality PET and CT data over only single-modality data for
H&N tumor segmentation.
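The abstract describes the reverse-process input as the channel-wise concatenation of the PET volume, the CT volume, and a Gaussian noise volume, fed to a 3D UNet that predicts the tumor mask. A minimal NumPy sketch of that input construction is shown below; the patch dimensions and the single-channel layout are illustrative assumptions, not values specified in the paper.

```python
import numpy as np

# Hypothetical patch size; the paper does not state the exact volume dimensions.
D, H, W = 16, 32, 32  # depth, height, width of a 3D patch

pet = np.random.rand(1, D, H, W).astype(np.float32)  # PET volume, 1 channel
ct = np.random.rand(1, D, H, W).astype(np.float32)   # CT volume, 1 channel


def reverse_step_input(pet, ct, noisy_mask):
    """Concatenate PET, CT, and the current noisy mask along the channel
    axis, forming the 3-channel input to the 3D UNet denoiser."""
    return np.concatenate([pet, ct, noisy_mask], axis=0)


# The reverse diffusion process starts from a pure Gaussian noise volume,
# which the network iteratively denoises toward the segmentation mask.
noisy_mask = np.random.randn(1, D, H, W).astype(np.float32)
x = reverse_step_input(pet, ct, noisy_mask)
print(x.shape)  # (3, 16, 32, 32)
```

In an actual implementation this concatenation would be rebuilt at every reverse timestep, with the noisy mask replaced by the denoised estimate from the previous step while the PET and CT channels stay fixed as conditioning.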
Related papers
- Diff3Dformer: Leveraging Slice Sequence Diffusion for Enhanced 3D CT Classification with Transformer Networks [5.806035963947936]
We propose a Diffusion-based 3D Vision Transformer (Diff3Dformer) to aggregate repetitive information within 3D CT scans.
Our method exhibits improved performance on two different scales of small datasets of 3D lung CT scans.
arXiv Detail & Related papers (2024-06-24T23:23:18Z)
- 2.5D Multi-view Averaging Diffusion Model for 3D Medical Image Translation: Application to Low-count PET Reconstruction with CT-less Attenuation Correction [17.897681480967087]
Positron Emission Tomography (PET) is an important clinical imaging tool but inevitably introduces radiation hazards to patients and healthcare providers.
It is desirable to develop 3D methods to translate the non-attenuation-corrected low-dose PET into attenuation-corrected standard-dose PET.
Recent diffusion models have emerged as a new state-of-the-art deep learning method for image-to-image translation, better than traditional CNN-based methods.
We developed a novel 2.5D Multi-view Averaging Diffusion Model (MADM) for 3D image-to-image translation with application on NAC
arXiv Detail & Related papers (2024-06-12T16:22:41Z)
- DiffHPE: Robust, Coherent 3D Human Pose Lifting with Diffusion [54.0238087499699]
We show that diffusion models enhance the accuracy, robustness, and coherence of human pose estimations.
We introduce DiffHPE, a novel strategy for harnessing diffusion models in 3D-HPE.
Our findings indicate that while standalone diffusion models provide commendable performance, their accuracy is even better in combination with supervised models.
arXiv Detail & Related papers (2023-09-04T12:54:10Z)
- Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models [52.529394863331326]
We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
arXiv Detail & Related papers (2023-03-15T08:28:06Z)
- Unsupervised Contrastive Learning based Transformer for Lung Nodule Detection [6.693379403133435]
Early detection of lung nodules with computed tomography (CT) is critical for the longer survival of lung cancer patients and better quality of life.
Computer-aided detection/diagnosis (CAD) is proven valuable as a second or concurrent reader in this context.
However, accurate detection of lung nodules remains a challenge for such CAD systems, and even for radiologists, due to variability in the size, location, and appearance of lung nodules.
Motivated by recent computer vision techniques, here we present a self-supervised region-based 3D transformer model to identify lung nodules.
arXiv Detail & Related papers (2022-04-30T01:19:00Z)
- Explainable multiple abnormality classification of chest CT volumes with AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z)
- Evidential segmentation of 3D PET/CT images [20.65495780362289]
A segmentation method based on belief functions is proposed to segment lymphomas in 3D PET/CT images.
The architecture is composed of a feature extraction module and an evidential segmentation (ES) module.
The method was evaluated on a database of 173 patients with diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2021-04-27T16:06:27Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.