Predicting Distant Metastases in Soft-Tissue Sarcomas from PET-CT scans
using Constrained Hierarchical Multi-Modality Feature Learning
- URL: http://arxiv.org/abs/2104.11416v1
- Date: Fri, 23 Apr 2021 05:12:02 GMT
- Authors: Yige Peng, Lei Bi, Ashnil Kumar, Michael Fulham, Dagan Feng, Jinman
Kim
- Abstract summary: Distant metastases (DM) are the leading cause of death in patients with soft-tissue sarcomas (STSs).
It is difficult to determine from imaging studies which STS patients will develop metastases.
We outline a new 3D CNN to help predict DM in patients from PET-CT data.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Distant metastases (DM) refer to the spread of tumors, usually beyond
the organ where the tumor originated. They are the leading cause of death in
patients with soft-tissue sarcomas (STSs). Positron emission
tomography-computed tomography (PET-CT) is regarded as the imaging modality of
choice for the management of STSs. It is difficult to determine from imaging
studies which STS patients will develop metastases. 'Radiomics' refers to the
extraction and analysis of quantitative features from medical images and it has
been employed to help identify such tumors. The state-of-the-art in radiomics
is based on convolutional neural networks (CNNs). Most CNNs are designed for
single-modality imaging data (CT or PET alone) and do not exploit the
information embedded in PET-CT where there is a combination of an anatomical
and functional imaging modality. Furthermore, most radiomic methods rely on
manual input from imaging specialists for tumor delineation, definition and
selection of radiomic features. This approach, however, may not be scalable to
tumors with complex boundaries and where there are multiple other sites of
disease. We outline a new 3D CNN to help predict DM in STS patients from PET-CT
data. The 3D CNN uses a constrained feature learning module and a hierarchical
multi-modality feature learning module that leverages the complementary
information from the modalities to focus on semantically important regions. Our
results on a public PET-CT dataset of STS patients show that multi-modal
information improves the ability to identify those patients who develop DM.
Further, our method outperformed all other related state-of-the-art methods.
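The abstract does not spell out the internals of the constrained and hierarchical multi-modality feature learning modules, but the general idea of letting the functional modality (PET) steer attention over the anatomical modality (CT) before fusing the streams can be sketched as follows. This is a minimal NumPy illustration under assumed shapes; the function names and the softmax-attention choice are hypothetical, not the paper's actual design.

```python
import numpy as np

def spatial_attention(pet_feat):
    """Derive a softmax spatial-attention map from PET feature responses.

    pet_feat: (D, H, W) array of pooled PET features.
    Returns a map of the same shape whose entries sum to 1.
    """
    flat = pet_feat.reshape(-1)
    flat = flat - flat.max()                      # numerical stability
    weights = np.exp(flat) / np.exp(flat).sum()
    return weights.reshape(pet_feat.shape)

def fuse_modalities(ct_feat, pet_feat):
    """Weight CT (anatomical) features by a PET-derived attention map,
    then concatenate both streams along the channel axis.

    ct_feat, pet_feat: (C, D, H, W) feature volumes.
    Returns a (2C, D, H, W) fused volume.
    """
    attn = spatial_attention(pet_feat.mean(axis=0))   # (D, H, W)
    ct_weighted = ct_feat * attn[None]                # broadcast over channels
    return np.concatenate([ct_weighted, pet_feat], axis=0)

# toy volumes: 4 channels, 2x4x4 voxels per modality
rng = np.random.default_rng(0)
ct = rng.normal(size=(4, 2, 4, 4))
pet = rng.normal(size=(4, 2, 4, 4))
fused = fuse_modalities(ct, pet)
print(fused.shape)  # (8, 2, 4, 4)
```

In a real 3D CNN this fusion would sit between learned convolutional stages and be repeated at several scales to make the feature learning hierarchical; here it only shows how one modality's response can focus the other on semantically important regions.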
Related papers
- Lung-CADex: Fully automatic Zero-Shot Detection and Classification of Lung Nodules in Thoracic CT Images [45.29301790646322]
Computer-aided diagnosis can help with early lung nodule detection and facilitate subsequent nodule characterization.
We propose CADe, a method for segmenting lung nodules in a zero-shot manner using a variant of the Segment Anything Model called MedSAM.
We also propose CADx, a method for characterizing nodules as benign or malignant by building a gallery of radiomic features and aligning image-feature pairs through contrastive learning.
arXiv Detail & Related papers (2024-07-02T19:30:25Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning framework with dual attention to address the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - Post-Hoc Explainability of BI-RADS Descriptors in a Multi-task Framework
for Breast Cancer Detection and Segmentation [48.08423125835335]
MT-BI-RADS is a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images.
It offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy.
arXiv Detail & Related papers (2023-08-27T22:07:42Z) - CancerUniT: Towards a Single Unified Model for Effective Detection,
Segmentation, and Diagnosis of Eight Major Cancers Using a Large Collection
of CT Scans [45.83431075462771]
Human readers or radiologists routinely perform full-body multi-organ multi-disease detection and diagnosis in clinical practice.
Most medical AI systems are built to focus on single organs with a narrow list of a few diseases.
CancerUniT is a query-based Mask Transformer model that outputs multi-tumor predictions.
arXiv Detail & Related papers (2023-01-28T20:09:34Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients
with Advanced Nasopharyngeal Carcinoma using Pretreatment PET/CT [15.386240118882569]
Nasopharyngeal Carcinoma (NPC) is a malignant epithelial cancer that occurs worldwide.
Deep learning has been introduced for survival prediction in various cancers, including NPC.
In this study, we introduced the concept of multi-task learning into deep survival models to address the overfitting caused by small datasets.
arXiv Detail & Related papers (2021-09-16T04:12:59Z) - Learned super resolution ultrasound for improved breast lesion
characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z) - Spatio-Temporal Dual-Stream Neural Network for Sequential Whole-Body PET
Segmentation [10.344707825773252]
We propose a spatio-temporal 'dual-stream' neural network (ST-DSNN) to segment sequential whole-body PET scans.
Our ST-DSNN learns and accumulates image features from PET scans acquired over time.
Our results show that our method outperforms the state-of-the-art PET image segmentation methods.
arXiv Detail & Related papers (2021-06-09T10:15:20Z) - Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung
Tumor Segmentation [11.622615048002567]
Multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
arXiv Detail & Related papers (2020-07-29T10:27:22Z) - Multi-Modality Information Fusion for Radiomics-based Neural
Architecture Search [10.994223928445589]
Existing radiomics methods require hand-crafted radiomic features to be designed, extracted, and selected.
Recent radiomics methods, based on convolutional neural networks (CNNs), also require manual input in network architecture design.
We propose a multi-modality neural architecture search method (MM-NAS) to automatically derive optimal multi-modality image features for radiomics.
arXiv Detail & Related papers (2020-07-12T14:35:13Z) - Experimenting with Convolutional Neural Network Architectures for the
automatic characterization of Solitary Pulmonary Nodules' malignancy rating [0.0]
Early and automatic diagnosis of Solitary Pulmonary Nodules (SPN) in Computed Tomography (CT) chest scans can enable early treatment and free doctors from time-consuming procedures.
In this study, we consider the problem of diagnostic classification between benign and malignant lung nodules in CT images derived from a PET/CT scanner.
More specifically, we intend to develop experimental Convolutional Neural Network (CNN) architectures and conduct experiments by tuning their parameters to investigate their behavior and define the optimal setup for accurate classification.
arXiv Detail & Related papers (2020-03-15T11:46:00Z)
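Several of the papers above (the multimodal spatial attention module, MSAM, in particular) share a common mechanism: derive a per-voxel attention mask from the combined modalities and multiply it into a backbone's feature maps. A rough NumPy stand-in for such a module, with assumed shapes and randomly initialized weights in place of learned ones, might look like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def msam_like_mask(pet, ct, w):
    """Compute a per-voxel attention mask from stacked PET and CT
    channels via a 1x1x1 convolution (a per-voxel linear map),
    standing in for a learned multimodal spatial attention module.

    pet, ct: (C, D, H, W) feature volumes; w: (2C,) weight vector.
    Returns a (D, H, W) mask with entries in (0, 1).
    """
    stacked = np.concatenate([pet, ct], axis=0)    # (2C, D, H, W)
    logits = np.einsum('c,cdhw->dhw', w, stacked)  # 1x1x1 conv
    return sigmoid(logits)

rng = np.random.default_rng(1)
pet = rng.normal(size=(3, 2, 4, 4))
ct = rng.normal(size=(3, 2, 4, 4))
w = rng.normal(size=(6,))              # would be learned end-to-end
backbone_feat = rng.normal(size=(8, 2, 4, 4))

mask = msam_like_mask(pet, ct, w)
attended = backbone_feat * mask[None]  # emphasize tumor-related regions
print(attended.shape)  # (8, 2, 4, 4)
```

Because the mask multiplies the backbone's features elementwise, it can be attached to common backbone architectures without changing their output shapes, which is what makes such modules trainable end-to-end alongside the backbone.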
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.