Multi-Task Multi-Scale Learning For Outcome Prediction in 3D PET Images
- URL: http://arxiv.org/abs/2203.00641v1
- Date: Tue, 1 Mar 2022 17:30:28 GMT
- Title: Multi-Task Multi-Scale Learning For Outcome Prediction in 3D PET Images
- Authors: Amine Amyar, Romain Modzelewski, Pierre Vera, Vincent Morard, Su Ruan
- Abstract summary: We propose a multi-task learning framework to predict patient's survival and response.
Our model was tested and validated for treatment response and survival in lung and esophageal cancers.
- Score: 4.234843176066354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background and Objectives: Predicting patient response to treatment and
survival in oncology is a key step toward precision medicine. To that
end, radiomics was proposed as a field of study where images are used instead
of invasive methods. The first step in radiomic analysis is the segmentation of
the lesion. However, this task is time-consuming and subject to physician
variability. Automated tools based on supervised deep learning have made great
progress in assisting physicians. However, they are data-hungry, and annotated
data remains a major issue in the medical field where only a small subset of
annotated images is available. Methods: In this work, we propose a multi-task
learning framework to predict patient's survival and response. We show that the
encoder can leverage multiple tasks to extract meaningful and powerful features
that improve radiomics performance. We also show that subsidiary tasks serve as
an inductive bias so that the model generalizes better. Results: Our model
was tested and validated for treatment response and survival in lung and
esophageal cancers, with an area under the ROC curve of 77% and 71%
respectively, outperforming single task learning methods. Conclusions: We show
that, by using a multi-task learning approach, we can boost the performance of
radiomic analysis by extracting rich information from intratumoral and
peritumoral regions.
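As a hedged illustration of the multi-task idea described above (not the authors' implementation — the layer sizes, the auxiliary reconstruction head, and the loss weighting are all assumptions), a shared encoder feeding separate task heads can be sketched in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Shared encoder (stand-in for a 3D CNN encoder): one ReLU layer, 64 -> 16 features.
W_enc = rng.normal(size=(64, 16)) * 0.1
# Task heads: survival, treatment response, and an auxiliary reconstruction task.
W_surv = rng.normal(size=(16, 1)) * 0.1
W_resp = rng.normal(size=(16, 1)) * 0.1
W_recon = rng.normal(size=(16, 64)) * 0.1

def forward(x):
    z = np.maximum(x @ W_enc, 0.0)  # shared features used by every head
    return sigmoid(z @ W_surv), sigmoid(z @ W_resp), z @ W_recon

x = rng.normal(size=(8, 64))        # batch of 8 patients (toy feature vectors)
p_surv, p_resp, recon = forward(x)

# Joint objective: the two main tasks plus a weighted subsidiary task,
# which acts as the inductive bias that regularizes the shared encoder.
y_surv = rng.integers(0, 2, size=(8, 1))
y_resp = rng.integers(0, 2, size=(8, 1))
bce = lambda p, y: -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
loss = bce(p_surv, y_surv) + bce(p_resp, y_resp) + 0.5 * np.mean((recon - x) ** 2)
```

Because all heads backpropagate through the same encoder, gradients from the auxiliary task shape the shared features even when the main-task labels are scarce.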
Related papers
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder
Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
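A hedged sketch of the masked-autoencoder pretext task mentioned above (the mask ratio, patch count, and feature sizes are illustrative assumptions, not the paper's settings): random patches of the input are hidden, and the model is trained to reconstruct them, so no annotations are needed.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_patches(x, mask_ratio=0.75):
    """Zero out a random subset of patches; return the masked input and the mask."""
    n_patches = x.shape[0]
    n_masked = int(round(mask_ratio * n_patches))
    idx = rng.permutation(n_patches)[:n_masked]
    mask = np.zeros(n_patches, dtype=bool)
    mask[idx] = True
    x_masked = x.copy()
    x_masked[mask] = 0.0
    return x_masked, mask

x = rng.normal(size=(16, 32))    # 16 patches, 32 features each
x_masked, mask = mask_patches(x)

# The self-supervised loss is reconstruction error on the masked patches only;
# `recon` here is a placeholder for a decoder's output.
recon = x_masked
loss = np.mean((recon[mask] - x[mask]) ** 2)
```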
arXiv Detail & Related papers (2023-08-12T11:31:01Z) - XrayGPT: Chest Radiographs Summarization using Medical Vision-Language
Models [60.437091462613544]
We introduce XrayGPT, a novel conversational medical vision-language model.
It can analyze and answer open-ended questions about chest radiographs.
We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z) - MLC at HECKTOR 2022: The Effect and Importance of Training Data when
Analyzing Cases of Head and Neck Tumors using Machine Learning [0.9166327220922845]
This paper presents the work done by team MLC for the 2022 version of the HECKTOR grand challenge held at MICCAI 2022.
Analysis of Positron Emission Tomography (PET) and Computed Tomography (CT) images has been proposed for predicting patient prognosis.
arXiv Detail & Related papers (2022-11-30T09:04:27Z) - Improving Radiology Summarization with Radiograph and Anatomy Prompts [60.30659124918211]
We propose a novel anatomy-enhanced multimodal model to promote impression generation.
In detail, we first construct a set of rules to extract anatomies and put these prompts into each sentence to highlight anatomy characteristics.
We utilize a contrastive learning module to align these two representations at the overall level and use a co-attention to fuse them at the sentence level.
arXiv Detail & Related papers (2022-10-15T14:05:03Z) - Metastatic Cancer Outcome Prediction with Injective Multiple Instance
Pooling [1.0965065178451103]
We process two public datasets to set up a benchmark cohort of 341 patients in total for studying outcome prediction of metastatic cancer.
We propose two injective multiple instance pooling functions that are better suited to outcome prediction.
Our results show that multiple instance learning with injective pooling functions can achieve state-of-the-art performance in the non-small-cell lung cancer CT and head and neck CT outcome prediction benchmarking tasks.
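A hedged toy illustration of why the pooling function matters in multiple instance learning (the specific injective pooling functions proposed in that paper are not reproduced here): max pooling can map two different bags of instance scores to the same bag-level output, while sum pooling keeps them distinct.

```python
import numpy as np

# Two bags of instance scores (e.g., per-lesion predictions for one patient).
bag_a = np.array([0.9, 0.1, 0.1])
bag_b = np.array([0.9, 0.9, 0.9])

max_a, max_b = bag_a.max(), bag_b.max()  # max pooling conflates the two bags
sum_a, sum_b = bag_a.sum(), bag_b.sum()  # sum pooling separates them

print(max_a == max_b, sum_a == sum_b)    # True False
```

Injectivity matters for outcome prediction because tumor burden spread across many lesions should produce a different bag representation than a single dominant lesion.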
arXiv Detail & Related papers (2022-03-09T16:58:03Z) - A Deep Learning Approach to Predicting Collateral Flow in Stroke
Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z) - ProCAN: Progressive Growing Channel Attentive Non-Local Network for Lung
Nodule Classification [0.0]
Lung cancer classification in screening computed tomography (CT) scans is one of the most crucial tasks for early detection of this disease.
Several deep learning based models have been proposed recently to classify lung nodules as malignant or benign.
We propose a new Progressive Growing Channel Attentive Non-Local (ProCAN) network for lung nodule classification.
arXiv Detail & Related papers (2020-10-29T08:42:11Z) - BiteNet: Bidirectional Temporal Encoder Network to Predict Medical
Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
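A minimal sketch of the scaled dot-product self-attention underlying such encoders (generic attention with Q = K = V, not BiteNet's specific bidirectional variant; the visit count and embedding size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def self_attention(x):
    """Each visit attends to every other visit in the journey (Q = K = V = x here)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise visit similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over visits
    return weights @ x, weights

x = rng.normal(size=(5, 8))    # a journey of 5 visits, 8-dim embeddings
out, w = self_attention(x)
```

The attention weights make the contextual dependencies explicit: each output row is a weighted mix of all visits, which is how temporal relationships across the whole journey enter the representation.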
arXiv Detail & Related papers (2020-09-24T00:42:36Z) - An encoder-decoder-based method for COVID-19 lung infection segmentation [3.561478746634639]
This paper proposes a multi-task deep-learning-based method for lung infection segmentation using CT-scan images.
The proposed method can segment lung infections with high performance even with a shortage of data and labeled images.
arXiv Detail & Related papers (2020-07-02T04:02:03Z) - Self-Training with Improved Regularization for Sample-Efficient Chest
X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z) - Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
Through comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
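A hedged sketch of the coreset intuition behind such methods (a plain greedy k-center selection weighted by a toy uncertainty score; this is not the confident-coreset algorithm itself, and all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def select(features, uncertainty, k):
    """Greedily pick k samples that are both far from the selected set and uncertain."""
    chosen = [int(np.argmax(uncertainty))]  # seed with the most uncertain sample
    for _ in range(k - 1):
        # Distance from each sample to its nearest already-chosen sample.
        d = np.min(
            np.linalg.norm(features[:, None] - features[chosen][None], axis=-1),
            axis=1,
        )
        score = d * uncertainty             # coverage weighted by uncertainty
        score[chosen] = -np.inf             # never re-pick a chosen sample
        chosen.append(int(np.argmax(score)))
    return chosen

feats = rng.normal(size=(20, 4))   # 20 unlabeled samples, 4-dim features
unc = rng.uniform(size=20)         # toy per-sample uncertainty
picked = select(feats, unc, 5)
```

Combining the distance term (distribution coverage) with the uncertainty term is the judgment the blurb describes: informative samples are those the model is unsure about *and* that represent unexplored regions of the data.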
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.