Cross-Modality Neuroimage Synthesis: A Survey
- URL: http://arxiv.org/abs/2202.06997v7
- Date: Thu, 21 Sep 2023 06:56:40 GMT
- Title: Cross-Modality Neuroimage Synthesis: A Survey
- Authors: Guoyang Xie, Yawen Huang, Jinbao Wang, Jiayi Lyu, Feng Zheng, Yefeng
Zheng, Yaochu Jin
- Abstract summary: Multi-modality imaging improves disease diagnosis and reveals distinct deviations in tissues with anatomical properties.
Completely aligned and paired multi-modality neuroimaging data have proven effective in brain research.
An alternative solution is to explore unsupervised or weakly supervised learning methods to synthesize the absent neuroimaging data.
- Score: 71.27193056354741
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-modality imaging improves disease diagnosis and reveals distinct
deviations in tissues with anatomical properties. Completely aligned and paired
multi-modality neuroimaging data have proven effective in brain research.
However, collecting fully aligned and paired
data is expensive or even impractical, since it faces many difficulties,
including high cost, long acquisition time, image corruption, and privacy
issues. An alternative solution is to explore unsupervised or weakly supervised
learning methods to synthesize the absent neuroimaging data. In this paper, we
provide a comprehensive review of cross-modality synthesis for neuroimages,
from the perspectives of weakly supervised and unsupervised settings, loss
functions, evaluation metrics, imaging modalities, datasets, and downstream
applications based on synthesis. We begin by highlighting several open
challenges for cross-modality neuroimage synthesis. Then, we discuss
representative architectures of cross-modality synthesis methods under
different supervisions. This is followed by a stepwise in-depth analysis to
evaluate how cross-modality neuroimage synthesis improves the performance of
its downstream tasks. Finally, we summarize the existing research findings and
point out future research directions. All resources are available at
https://github.com/M-3LAB/awesome-multimodal-brain-image-systhesis
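Among the topics the survey reviews are evaluation metrics for synthesized neuroimages. As a purely illustrative aside (not taken from the paper), the sketch below computes two metrics commonly reported in this literature, MAE and PSNR, with NumPy; the array names, shapes, and data range are assumptions for the example only.

```python
# Minimal sketch (not from the survey): MAE and PSNR between a synthesized
# neuroimage volume and its ground-truth counterpart. Array names and the
# data_range argument are illustrative assumptions.
import numpy as np

def mae(reference: np.ndarray, synthesized: np.ndarray) -> float:
    """Mean absolute error over all voxels."""
    return float(np.mean(np.abs(reference - synthesized)))

def psnr(reference: np.ndarray, synthesized: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB; data_range is the intensity span of the reference."""
    mse = np.mean((reference - synthesized) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10((data_range ** 2) / mse))

# Toy usage: random volumes standing in for a real scan and its synthesized counterpart.
rng = np.random.default_rng(0)
real = rng.random((64, 64, 64))
fake = real + 0.05 * rng.standard_normal((64, 64, 64))
print(f"MAE:  {mae(real, fake):.4f}")
print(f"PSNR: {psnr(real, fake, data_range=1.0):.2f} dB")
```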
Related papers
- NeuroFly: A framework for whole-brain single neuron reconstruction [17.93211301158225]
We introduce NeuroFly, a validated framework for large-scale automatic single neuron reconstruction.
NeuroFly breaks down the process into three distinct stages: segmentation, connection, and proofreading.
Our goal is to foster collaboration among researchers to address the neuron reconstruction challenge.
arXiv Detail & Related papers (2024-11-07T13:56:13Z)
- Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on Large-Scale Synthetic Neuroimaging Dataset [11.173478552040441]
Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain.
In this work, we evaluated several unsupervised methods to train a feature extractor for downstream AD vs. CN classification.
arXiv Detail & Related papers (2024-06-20T11:26:32Z)
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
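As a rough illustration of the modality-agnostic-to-modality-specific idea summarized in the entry above (this is not the paper's actual modality infuser), the sketch below conditions a shared encoder's features on a learned per-modality embedding before decoding; all module names and sizes are made-up assumptions.

```python
# Hypothetical sketch of the "shared encoder + modality-specific transformation"
# pattern described above; NOT the paper's architecture. Layer sizes and names
# are illustrative assumptions.
import torch
import torch.nn as nn

class ToyModalityTranslator(nn.Module):
    def __init__(self, num_modalities: int = 4, feat_dim: int = 32):
        super().__init__()
        # Shared encoder: produces modality-agnostic feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # One learned embedding per target modality (e.g., T1, T2, FLAIR, T1ce).
        self.modality_embed = nn.Embedding(num_modalities, feat_dim)
        # Decoder: turns modality-specific features back into an image.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, target_modality: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)                      # modality-agnostic features
        cond = self.modality_embed(target_modality)  # (B, feat_dim)
        feats = feats + cond[:, :, None, None]       # inject target-modality code
        return self.decoder(feats)

# Toy usage: translate a batch of single-channel slices into modality index 2.
model = ToyModalityTranslator()
slices = torch.randn(8, 1, 64, 64)
target = torch.full((8,), 2, dtype=torch.long)
print(model(slices, target).shape)  # torch.Size([8, 1, 64, 64])
```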
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
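The connectivity-aware contrastive idea in the entry above can be pictured with a standard InfoNCE-style loss in which embeddings of segment pairs known to be connected are treated as positives. The sketch below is a generic stand-in under that assumption, not the FlyTracing authors' implementation; names and dimensions are invented for illustration.

```python
# Generic InfoNCE-style contrastive loss sketch for a "connectivity-aware"
# objective: embeddings of segment pairs labelled as connected are pulled
# together, and all other pairs in the batch act as negatives. Illustrative
# only, not the paper's method.
import torch
import torch.nn.functional as F

def connectivity_contrastive_loss(anchor_emb: torch.Tensor,
                                  partner_emb: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """anchor_emb[i] and partner_emb[i] come from a connected segment pair."""
    a = F.normalize(anchor_emb, dim=1)
    p = F.normalize(partner_emb, dim=1)
    logits = a @ p.t() / temperature         # (B, B) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)  # diagonal entries are the positives

# Toy usage with random 128-d embeddings of 16 connected segment pairs.
anchors = torch.randn(16, 128)
partners = anchors + 0.1 * torch.randn(16, 128)
print(connectivity_contrastive_loss(anchors, partners).item())
```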
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- SynthStrip: Skull-Stripping for Any Brain Image [7.846209440615028]
We introduce SynthStrip, a rapid, learning-based brain-extraction tool.
By leveraging anatomical segmentations, SynthStrip generates a synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images.
We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model.
arXiv Detail & Related papers (2022-03-18T14:08:20Z)
- Robust Segmentation of Brain MRI in the Wild with Hierarchical CNNs and no Retraining [1.0499611180329802]
Retrospective analysis of brain MRI scans acquired in the clinic has the potential to enable neuroimaging studies with sample sizes much larger than those found in research datasets.
Recent advances in convolutional neural networks (CNNs) and domain randomisation for image segmentation may enable morphometry of clinical MRI at scale.
We show that SynthSeg is generally robust, but frequently falters on scans with low signal-to-noise ratio or poor tissue contrast.
We propose SynthSeg+, a novel method that greatly mitigates these problems using a hierarchy of conditional segmentation and denoising CNNs.
arXiv Detail & Related papers (2022-03-03T19:18:28Z)
- Multimodal Image Synthesis and Editing: The Generative AI Era [131.9569600472503]
Multimodal image synthesis and editing has become a hot research topic in recent years.
We comprehensively contextualize recent advances in multimodal image synthesis and editing.
We describe benchmark datasets and evaluation metrics as well as corresponding experimental results.
arXiv Detail & Related papers (2021-12-27T10:00:16Z)
- Brain Image Synthesis with Unsupervised Multivariate Canonical CSC$\ell_4$Net [122.8907826672382]
We propose to learn dedicated features that cross both inter- and intra-modal variations using a novel CSC$\ell_4$Net.
arXiv Detail & Related papers (2021-03-22T05:19:40Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
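The generative model in the MultiView ICA entry above, where each subject's data are a linear mixture of shared independent sources plus noise, can be written as x_i = A_i s + n_i. The snippet below merely simulates data from that model with NumPy using made-up dimensions and noise level; it is not the authors' estimator.

```python
# Toy simulation of the shared-response generative model described above:
# each subject i observes x_i = A_i @ s + n_i, with sources s shared across
# subjects. Dimensions and noise level are illustrative assumptions; this is
# not the MultiView ICA fitting procedure itself.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sources, n_sensors, n_samples = 5, 4, 10, 1000

# Shared, independent (here Laplacian) sources common to every subject.
shared_sources = rng.laplace(size=(n_sources, n_samples))

subject_data = []
for _ in range(n_subjects):
    mixing = rng.standard_normal((n_sensors, n_sources))     # subject-specific A_i
    noise = 0.1 * rng.standard_normal((n_sensors, n_samples))
    subject_data.append(mixing @ shared_sources + noise)     # x_i = A_i s + n_i

print(len(subject_data), subject_data[0].shape)  # 5 (10, 1000)
```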