DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients
with Advanced Nasopharyngeal Carcinoma using Pretreatment PET/CT
- URL: http://arxiv.org/abs/2109.07711v1
- Date: Thu, 16 Sep 2021 04:12:59 GMT
- Authors: Mingyuan Meng, Bingxin Gu, Lei Bi, Shaoli Song, David Dagan Feng, and
Jinman Kim
- Abstract summary: Nasopharyngeal Carcinoma (NPC) is a worldwide malignant epithelial cancer.
Deep learning has been introduced to survival prediction in various cancers, including NPC.
In this study, we introduced the concept of multi-task learning into deep survival models to address the overfitting problem resulting from small data.
- Score: 15.386240118882569
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Nasopharyngeal Carcinoma (NPC) is a worldwide malignant epithelial cancer.
Survival prediction is a major concern for NPC patients, as it provides early
prognostic information that is needed to guide treatments. Recently, deep
learning, which leverages Deep Neural Networks (DNNs) to learn deep
representations of image patterns, has been introduced to survival
prediction in various cancers, including NPC. It has been reported that
image-derived end-to-end deep survival models have the potential to outperform
clinical prognostic indicators and traditional radiomics-based survival models
in prognostic performance. However, deep survival models, especially 3D models,
require large image training data to avoid overfitting. Unfortunately, medical
image data is usually scarce, especially for Positron Emission
Tomography/Computed Tomography (PET/CT) due to the high cost of PET/CT
scanning. Compared to Magnetic Resonance Imaging (MRI) or Computed Tomography
(CT), which provide only anatomical information about tumors, PET/CT provides
both anatomical (from CT) and metabolic (from PET) information and is therefore
promising for more accurate survival prediction. However, we have not identified
any
3D end-to-end deep survival model that applies to small PET/CT data of NPC
patients. In this study, we introduced the concept of multi-task learning into
deep survival models to address the overfitting problem resulting from small
data. Tumor segmentation was incorporated as an auxiliary task to improve the
model's efficiency in learning from scarce PET/CT data. Based on this idea, we
proposed a 3D end-to-end Deep Multi-Task Survival model (DeepMTS) for joint
survival prediction and tumor segmentation. Our DeepMTS can jointly learn
survival prediction and tumor segmentation using PET/CT data of only 170
patients with advanced NPC.
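The multi-task idea behind the abstract can be expressed as a joint objective: a survival loss plus a weighted auxiliary segmentation loss. Below is a minimal, framework-free sketch assuming a Cox negative partial log-likelihood for the survival head and a soft Dice loss for the segmentation head, combined by a hypothetical weight `lam`. The function names and the specific weighting are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cox_neg_log_likelihood(risks, times, events):
    """Cox negative partial log-likelihood.

    risks: predicted risk scores, one per patient.
    times: observed survival/censoring times.
    events: 1 if the event (death) was observed, 0 if censored.
    For each observed event, the risk set is all patients still at
    risk at that time (time >= event time).
    """
    loss, n_events = 0.0, 0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:  # censored patients contribute only via risk sets
            continue
        risk_set = [risks[j] for j, t_j in enumerate(times) if t_j >= t_i]
        log_sum = math.log(sum(math.exp(r) for r in risk_set))
        loss += log_sum - risks[i]
        n_events += 1
    return loss / max(n_events, 1)

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap between predicted probabilities and a binary
    mask, both given as flat lists of voxel values."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def multi_task_loss(risks, times, events, seg_pred, seg_target, lam=1.0):
    """Joint objective: survival loss + lam * auxiliary segmentation
    loss; the auxiliary task regularizes learning on small datasets."""
    surv = cox_neg_log_likelihood(risks, times, events)
    seg = soft_dice_loss(seg_pred, seg_target)
    return surv + lam * seg
```

In practice both heads would share a 3D backbone and be trained end-to-end with a deep learning framework; this sketch only shows how the two losses combine into one training signal.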
Related papers
- Lung tumor segmentation in MRI mice scans using 3D nnU-Net with minimum annotations [0.4999814847776097]
In drug discovery, accurate lung tumor segmentation is an important step for assessing tumor size and its progression using in-vivo imaging such as MRI.
In this work, we focus on optimizing lung tumor segmentation in mice. First, we demonstrate that the nnU-Net model outperforms the U-Net, U-Net3+, and DeepMeta models. Most importantly, we achieve better results with nnU-Net 3D models than 2D models.
arXiv Detail & Related papers (2024-11-01T14:32:58Z) - 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - AutoPET Challenge: Tumour Synthesis for Data Augmentation [26.236831356731017]
We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
arXiv Detail & Related papers (2024-09-12T14:23:19Z) - Lung-CADex: Fully automatic Zero-Shot Detection and Classification of Lung Nodules in Thoracic CT Images [45.29301790646322]
Computer-aided diagnosis can help with early lung nodule detection and facilitate subsequent nodule characterization.
We propose CADe for segmenting lung nodules in a zero-shot manner using a variant of the Segment Anything Model called MedSAM.
We also propose CADx, a method for characterizing nodules as benign/malignant by building a gallery of radiomic features and aligning image-feature pairs through contrastive learning.
arXiv Detail & Related papers (2024-07-02T19:30:25Z) - Score-Based Generative Models for PET Image Reconstruction [38.72868748574543]
We propose several PET-specific adaptations of score-based generative models.
The proposed framework is developed for both 2D and 3D PET.
In addition, we provide an extension to guided reconstruction using magnetic resonance images.
arXiv Detail & Related papers (2023-08-27T19:43:43Z) - Merging-Diverging Hybrid Transformer Networks for Survival Prediction in
Head and Neck Cancer [10.994223928445589]
We propose a merging-diverging learning framework for survival prediction from multi-modality images.
This framework has a merging encoder to fuse multi-modality information and a diverging decoder to extract region-specific information.
Our framework is demonstrated on survival prediction from PET-CT images in Head and Neck (H&N) cancer.
arXiv Detail & Related papers (2023-07-07T07:16:03Z) - Exploring Vanilla U-Net for Lesion Segmentation from Whole-body
FDG-PET/CT Scans [16.93163630413171]
Since FDG-PET scans only provide metabolic information, healthy tissue or benign disease with irregular glucose consumption may be mistaken for cancer.
In this paper, we explore the potential of U-Net for lesion segmentation in whole-body FDG-PET/CT scans from three aspects, including network architecture, data preprocessing, and data augmentation.
Our method achieves first place in both preliminary and final leaderboards of the autoPET 2022 challenge.
arXiv Detail & Related papers (2022-10-14T03:37:18Z) - Breast Cancer Induced Bone Osteolysis Prediction Using Temporal
Variational Auto-Encoders [65.95959936242993]
We develop a deep learning framework that can accurately predict and visualize the progression of osteolytic bone lesions.
It will assist in planning and evaluating treatment strategies to prevent skeletal related events (SREs) in breast cancer patients.
arXiv Detail & Related papers (2022-03-20T21:00:10Z) - Predicting Distant Metastases in Soft-Tissue Sarcomas from PET-CT scans
using Constrained Hierarchical Multi-Modality Feature Learning [14.60163613315816]
Distant metastases (DM) are the leading cause of death in patients with soft-tissue sarcomas (STSs).
It is difficult to determine from imaging studies which STS patients will develop metastases.
We outline a new 3D CNN to help predict DM in patients from PET-CT data.
arXiv Detail & Related papers (2021-04-23T05:12:02Z) - M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia
Screening from CT Imaging [85.00066186644466]
We propose a Multi-task Multi-slice Deep Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT imaging.
In addition to distinguishing COVID-19 from healthy, H1N1, and CAP cases, our M3Lung-Sys can also locate the areas of relevant lesions.
arXiv Detail & Related papers (2020-10-07T06:22:24Z) - Spatio-spectral deep learning methods for in-vivo hyperspectral
laryngeal cancer detection [49.32653090178743]
Early detection of head and neck tumors is crucial for patient survival.
Hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors.
We present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI.
arXiv Detail & Related papers (2020-04-21T17:07:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.