Multi-Modality Information Fusion for Radiomics-based Neural
Architecture Search
- URL: http://arxiv.org/abs/2007.06002v1
- Date: Sun, 12 Jul 2020 14:35:13 GMT
- Title: Multi-Modality Information Fusion for Radiomics-based Neural
Architecture Search
- Authors: Yige Peng, Lei Bi, Michael Fulham, Dagan Feng, and Jinman Kim
- Abstract summary: Existing radiomics methods require the design of hand-crafted radiomic features and their extraction and selection.
Recent radiomics methods, based on convolutional neural networks (CNNs), also require manual input in network architecture design.
We propose a multi-modality neural architecture search method (MM-NAS) to automatically derive optimal multi-modality image features for radiomics.
- Score: 10.994223928445589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 'Radiomics' is a method that extracts mineable quantitative features from
radiographic images. These features can then be used to determine prognosis,
for example, predicting the development of distant metastases (DM). Existing
radiomics methods, however, require complex manual effort including the design
of hand-crafted radiomic features and their extraction and selection. Recent
radiomics methods, based on convolutional neural networks (CNNs), also require
manual input in network architecture design and hyper-parameter tuning.
Radiomic complexity is further compounded when there are multiple imaging
modalities, for example, combined positron emission tomography - computed
tomography (PET-CT) where there is functional information from PET and
complementary anatomical localization information from computed tomography
(CT). Existing multi-modality radiomics methods manually fuse the data that are
extracted separately. Reliance on manual fusion often results in sub-optimal
fusion because it depends on an expert's understanding of medical
images. In this study, we propose a multi-modality neural architecture search
method (MM-NAS) to automatically derive optimal multi-modality image features
for radiomics and thus negate the dependence on a manual process. We evaluated
our MM-NAS on the ability to predict DM using a public PET-CT dataset of
patients with soft-tissue sarcomas (STSs). Our results show that MM-NAS achieved
higher prediction accuracy than state-of-the-art radiomics methods.
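The abstract does not spell out the MM-NAS search space, but the central idea of making PET-CT fusion searchable can be illustrated with a DARTS-style mixed operation, in which candidate fusion operators are weighted by learnable architecture parameters. The sketch below is an assumption-laden illustration (the class, variable names, and candidate operators are hypothetical), not the authors' implementation.

```python
import torch
import torch.nn as nn


class SearchableFusion(nn.Module):
    """Illustrative searchable PET-CT fusion cell (hypothetical search space).

    A DARTS-style mixed operation: every candidate fusion operator is applied
    and the results are combined with softmax-weighted architecture parameters.
    """

    def __init__(self, channels: int):
        super().__init__()
        # candidate 0: concatenate modalities, then reduce back to `channels`
        self.concat_reduce = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # candidates 1-3: parameter-free element-wise fusions
        self.elementwise_ops = (torch.add, torch.maximum, torch.mul)
        # one architecture weight per candidate, learned during the search phase
        self.alpha = nn.Parameter(1e-3 * torch.randn(1 + len(self.elementwise_ops)))

    def forward(self, pet_feat: torch.Tensor, ct_feat: torch.Tensor) -> torch.Tensor:
        outputs = [self.concat_reduce(torch.cat([pet_feat, ct_feat], dim=1))]
        outputs += [op(pet_feat, ct_feat) for op in self.elementwise_ops]
        weights = torch.softmax(self.alpha, dim=0)
        # soft mixture during search; after search, keep only the argmax candidate
        return sum(w * out for w, out in zip(weights, outputs))
```

In a DARTS-style search, the architecture weights `alpha` would be optimised on validation data while the convolution weights are optimised on training data; after the search, only the highest-weighted fusion operator would be retained in the final radiomics network.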
Related papers
- MRI Parameter Mapping via Gaussian Mixture VAE: Breaking the Assumption of Independent Pixels [3.720246718519987] (2024-11-16)
We introduce and demonstrate a new paradigm for quantitative parameter mapping in MRI.
We propose a self-supervised deep variational approach that breaks the assumption of independent pixels.
Our approach can hence support the clinical adoption of parameter mapping methods such as dMRI and qMRI.
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186] (2024-07-15)
We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show that, depending on the input modalities, the models can perform very differently.
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856] (2024-03-27)
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394] (2024-02-01)
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258] (2023-11-18)
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545] (2023-11-18)
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
- Radiology-Llama2: Best-in-Class Large Language Model for Radiology [71.27700230067168] (2023-08-29)
This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance.
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051] (2023-06-05)
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
- RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy [1.8161758803237067] (2022-04-26)
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach (a minimal late-fusion sketch follows this list).
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564] (2022-01-07)
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
- Predicting Distant Metastases in Soft-Tissue Sarcomas from PET-CT scans using Constrained Hierarchical Multi-Modality Feature Learning [14.60163613315816] (2021-04-23)
Distant metastases (DM) are the leading cause of death in patients with soft-tissue sarcomas (STSs).
It is difficult to determine from imaging studies which STS patients will develop metastases.
We outline a new 3D CNN to help predict DM in patients from PET-CT data.
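For contrast with the searchable fusion sketch above, the decision-level (late) fusion described in several related works, such as the RadioPathomics entry, can be summarised as training one model per modality and combining only their outputs. The snippet below is a minimal, hypothetical baseline (the class name, backbones, and fixed weighting are assumptions), not a reproduction of any listed paper.

```python
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Hypothetical decision-level (late) fusion baseline for two imaging modalities."""

    def __init__(self, pet_backbone: nn.Module, ct_backbone: nn.Module, pet_weight: float = 0.5):
        super().__init__()
        self.pet_backbone = pet_backbone  # each backbone maps a volume to class logits
        self.ct_backbone = ct_backbone
        self.pet_weight = pet_weight      # fixed, hand-chosen weight: the manual fusion step

    def forward(self, pet_vol: torch.Tensor, ct_vol: torch.Tensor) -> torch.Tensor:
        pet_prob = torch.softmax(self.pet_backbone(pet_vol), dim=1)
        ct_prob = torch.softmax(self.ct_backbone(ct_vol), dim=1)
        # late fusion: weighted average of per-modality class probabilities
        return self.pet_weight * pet_prob + (1.0 - self.pet_weight) * ct_prob
```

The fixed `pet_weight` is exactly the kind of hand-tuned fusion choice that the MM-NAS abstract argues is sub-optimal, since it encodes a single expert's prior about how much each modality should contribute.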