CodeBrain: Imputing Any Brain MRI via Modality- and Instance-Specific Codes
- URL: http://arxiv.org/abs/2501.18328v2
- Date: Sun, 09 Mar 2025 02:55:58 GMT
- Title: CodeBrain: Imputing Any Brain MRI via Modality- and Instance-Specific Codes
- Authors: Yicheng Wu, Tao Song, Zhonghua Wu, Jin Ye, Zongyuan Ge, Zhaolin Chen, Jianfei Cai
- Abstract summary: We propose CodeBrain, a pipeline for unified brain MRI imputation. In the first stage, CodeBrain reconstructs a target modality by learning a compact scalar-quantized code for each instance and modality. In the second stage, a projection encoder is trained to predict full-modality compact codes from any incomplete MRI samples.
- Score: 39.308423499912806
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unified MRI imputation, which can adapt to diverse imputation scenarios, is highly desirable as it reduces scanning costs and provides comprehensive MRI information for improved clinical diagnosis. Existing unified MRI imputation methods either rely on specific prompts to guide their transformation network or require multiple modality-specific modules. However, these approaches struggle to capture large modality and instance variations or become too complex to generalize effectively. To address these limitations, we propose CodeBrain, a fundamentally different pipeline for unified brain MRI imputation. Our key idea is to reframe various inter-modality transformations as a full-modality code prediction task via a two-stage framework. In the first stage, CodeBrain reconstructs a target modality from any other modalities by learning a compact scalar-quantized code for each instance and modality. Any target modality can then be reconstructed with high fidelity by combining the corresponding code with shared features extracted from any available modality. In the second stage, a projection encoder is trained to predict full-modality compact codes from any incomplete MRI samples, effectively simulating various imputation scenarios. We evaluate our CodeBrain on two public brain MRI datasets (i.e., IXI and BraTS 2023). Extensive experiments demonstrate that CodeBrain outperforms state-of-the-art methods, setting a new benchmark for unified brain MRI imputation. Our code will be released.
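The abstract describes a concrete two-stage mechanism: stage one learns a compact scalar-quantized code per instance and modality and reconstructs the target from that code plus shared features, while stage two trains a projection encoder to predict the full-modality codes from incomplete inputs, simulating various imputation scenarios. The PyTorch sketch below illustrates this structure under stated assumptions: the module names (`ScalarQuantizer`, `Stage1Reconstructor`, `Stage2CodePredictor`), the quantizer design, and all shapes and dimensions are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ScalarQuantizer(nn.Module):
    """Round each latent channel to one of `levels` evenly spaced values in
    [-1, 1], with a straight-through estimator past the rounding step."""

    def __init__(self, levels: int = 8):
        super().__init__()
        self.levels = levels

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z = torch.tanh(z)                      # bound the latent before quantizing
        half = (self.levels - 1) / 2
        zq = torch.round(z * half) / half      # snap to the quantization grid
        return z + (zq - z).detach()           # identity gradient w.r.t. z


class Stage1Reconstructor(nn.Module):
    """Stage 1 (sketch): reconstruct a target modality from shared features of
    any available modality plus a compact quantized code of the target."""

    def __init__(self, in_ch: int = 1, feat_ch: int = 32, code_dim: int = 16):
        super().__init__()
        self.shared_enc = nn.Sequential(       # modality-shared feature extractor
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.code_head = nn.Sequential(        # compresses the target into a code
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, code_dim),
        )
        self.quant = ScalarQuantizer(levels=8)
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch + code_dim, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, in_ch, 3, padding=1),
        )

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        feats = self.shared_enc(src)                              # shared features
        code = self.quant(self.code_head(self.shared_enc(tgt)))   # (B, code_dim)
        grid = code[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.decoder(torch.cat([feats, grid], dim=1))


class Stage2CodePredictor(nn.Module):
    """Stage 2 (sketch): predict all modalities' codes from an incomplete
    sample; a random mask zeroes out modalities during training to simulate
    the different imputation scenarios."""

    def __init__(self, n_mod: int = 4, code_dim: int = 16, hw: int = 64):
        super().__init__()
        self.n_mod, self.code_dim = n_mod, code_dim
        self.proj = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_mod * hw * hw, 256), nn.ReLU(),
            nn.Linear(256, n_mod * code_dim),
        )

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        x = x * mask[:, :, None, None]          # drop the missing modalities
        return self.proj(x).view(-1, self.n_mod, self.code_dim)


if __name__ == "__main__":
    src = torch.randn(2, 1, 64, 64)              # any available modality
    tgt = torch.randn(2, 1, 64, 64)              # target modality (training only)
    print(Stage1Reconstructor()(src, tgt).shape)    # torch.Size([2, 1, 64, 64])

    full = torch.randn(2, 4, 64, 64)             # stack of four modalities
    mask = torch.tensor([[1., 1., 0., 0.],       # per-sample availability mask
                         [1., 0., 1., 1.]])
    print(Stage2CodePredictor()(full, mask).shape)  # torch.Size([2, 4, 16])
```

The straight-through estimator is the standard trick that makes the rounding step trainable end to end; in an actual training loop one would presumably add a reconstruction loss (e.g., L1) in stage one and a code-matching loss in stage two, though the paper's exact objectives are not specified in this summary.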
Related papers
- UniBrain: A Unified Model for Cross-Subject Brain Decoding [22.49964298783508]
We present UniBrain, a unified brain decoding model that requires no subject-specific parameters. Our approach includes a group-based extractor to handle variable fMRI signal lengths, a mutual assistance embedder to capture cross-subject commonalities, and a bilevel feature alignment scheme for extracting subject-invariant features. We validate our UniBrain on the brain decoding benchmark, achieving comparable performance to current state-of-the-art subject-specific models with extremely fewer parameters.
arXiv Detail & Related papers (2024-12-27T07:03:47Z)
- BrainSegFounder: Towards 3D Foundation Models for Neuroimage Segmentation [6.5388528484686885]
This study introduces a novel approach towards the creation of medical foundation models.
Our method involves a novel two-stage pretraining approach using vision transformers.
BrainFounder demonstrates a significant performance gain, surpassing the achievements of previous winning solutions.
arXiv Detail & Related papers (2024-06-14T19:49:45Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method for multi-subject fMRI signals, called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding [51.911473457195555]
BrainCLIP is a task-agnostic fMRI-based brain decoding model.
It bridges the modality gap between brain activity, image, and text.
BrainCLIP can reconstruct visual stimuli with high semantic fidelity.
arXiv Detail & Related papers (2023-02-25T03:28:54Z)
- DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain tumor segmentation with incomplete multi-modal MRI scans [16.93394669748461]
Brain tumor segmentation based on multi-modal magnetic resonance imaging (MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and postoperative evaluations.
Despite the inspiring performance achieved by existing automatic segmentation methods, complete multi-modal MRI data are often unavailable in real-world clinical applications.
We propose a Deeply supervIsed knowledGE tranSfer neTwork (DIGEST), which achieves accurate brain tumor segmentation under different modality-missing scenarios.
arXiv Detail & Related papers (2022-11-15T09:01:14Z)
- mmFormer: Multimodal Medical Transformer for Incomplete Multimodal Learning of Brain Tumor Segmentation [38.22852533584288]
We propose a novel Multimodal Medical Transformer (mmFormer) for incomplete multimodal learning with three main components.
The proposed mmFormer outperforms the state-of-the-art methods for incomplete multimodal brain tumor segmentation on almost all subsets of incomplete modalities.
arXiv Detail & Related papers (2022-06-06T08:41:56Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Neural Architecture Search for Gliomas Segmentation on Multimodal Magnetic Resonance Imaging [2.66512000865131]
We propose a neural architecture search (NAS) based solution to brain tumor segmentation tasks on multimodal MRI scans.
The developed solution also integrates normalization and patching strategies tailored for brain MRI processing.
arXiv Detail & Related papers (2020-05-13T14:32:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.