Multi-Modality Information Fusion for Radiomics-based Neural
Architecture Search
- URL: http://arxiv.org/abs/2007.06002v1
- Date: Sun, 12 Jul 2020 14:35:13 GMT
- Title: Multi-Modality Information Fusion for Radiomics-based Neural
Architecture Search
- Authors: Yige Peng, Lei Bi, Michael Fulham, Dagan Feng, and Jinman Kim
- Abstract summary: Existing radiomics methods require the design of hand-crafted radiomic features and their extraction and selection.
Recent radiomics methods, based on convolutional neural networks (CNNs), also require manual input in network architecture design.
We propose a multi-modality neural architecture search method (MM-NAS) to automatically derive optimal multi-modality image features for radiomics.
- Score: 10.994223928445589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 'Radiomics' is a method that extracts mineable quantitative features from
radiographic images. These features can then be used to determine prognosis,
for example, predicting the development of distant metastases (DM). Existing
radiomics methods, however, require complex manual effort including the design
of hand-crafted radiomic features and their extraction and selection. Recent
radiomics methods, based on convolutional neural networks (CNNs), also require
manual input in network architecture design and hyper-parameter tuning.
Radiomic complexity is further compounded when there are multiple imaging
modalities, for example, combined positron emission tomography - computed
tomography (PET-CT) where there is functional information from PET and
complementary anatomical localization information from computed tomography
(CT). Existing multi-modality radiomics methods manually fuse the data that are
extracted separately. Reliance on manual fusion often results in sub-optimal
fusion because it depends on an 'expert's' understanding of medical
images. In this study, we propose a multi-modality neural architecture search
method (MM-NAS) to automatically derive optimal multi-modality image features
for radiomics and thus negate the dependence on a manual process. We evaluated
our MM-NAS on the ability to predict DM using a public PET-CT dataset of
patients with soft-tissue sarcomas (STSs). Our results show that our MM-NAS had
a higher prediction accuracy when compared to state-of-the-art radiomics
methods.
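As an illustration of the kind of searchable PET-CT fusion block the abstract describes, below is a minimal sketch in the spirit of gradient-based NAS (e.g. DARTS): candidate fusion operations are mixed with softmax-weighted architecture parameters that are learned jointly with the network weights. This is not the authors' implementation; the candidate operations, tensor shapes, and names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedFusion(nn.Module):
    """Softmax-weighted mixture over candidate PET-CT fusion operations (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        # Candidate fusion ops (assumed): element-wise sum, element-wise product,
        # and a 1x1x1 convolution over the concatenated modalities.
        self.concat_conv = nn.Conv3d(2 * channels, channels, kernel_size=1)
        # One architecture weight per candidate op, optimized during the search.
        self.alpha = nn.Parameter(torch.zeros(3))

    def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        candidates = [
            pet + ct,                                   # additive fusion
            pet * ct,                                   # multiplicative fusion
            self.concat_conv(torch.cat([pet, ct], 1)),  # learned fusion of the concatenation
        ]
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * c for w, c in zip(weights, candidates))

# Toy usage: 3D feature maps (N, C, D, H, W) from separate PET and CT encoders.
fuse = MixedFusion(channels=16)
pet = torch.randn(1, 16, 8, 32, 32)
ct = torch.randn(1, 16, 8, 32, 32)
print(fuse(pet, ct).shape)  # torch.Size([1, 16, 8, 32, 32])

After the search, the highest-weighted operation would typically be retained and the network retrained with it, mirroring the usual discretization step in differentiable architecture search.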
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186]
We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show that, depending on the input modalities, the models can perform very differently.
arXiv Detail & Related papers (2024-07-15T16:38:59Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Enhanced Synthetic MRI Generation from CT Scans Using CycleGAN with Feature Extraction [3.2088888904556123]
We propose an approach for enhanced monomodal registration using synthetic MRI images from CT scans.
Our methodology shows promising results, outperforming several state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T16:39:56Z)
- Radiology-Llama2: Best-in-Class Large Language Model for Radiology [71.27700230067168]
This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-08-29T17:44:28Z)
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z)
- RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy [1.8161758803237067]
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach.
arXiv Detail & Related papers (2022-04-26T16:32:52Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Predicting Distant Metastases in Soft-Tissue Sarcomas from PET-CT scans using Constrained Hierarchical Multi-Modality Feature Learning [14.60163613315816]
Distant metastases (DM) are the leading cause of death in patients with soft-tissue sarcomas (STSs).
It is difficult to determine from imaging studies which STS patients will develop metastases.
We outline a new 3D CNN to help predict DM in patients from PET-CT data.
arXiv Detail & Related papers (2021-04-23T05:12:02Z)