Brain-ID: Learning Contrast-agnostic Anatomical Representations for
Brain Imaging
- URL: http://arxiv.org/abs/2311.16914v2
- Date: Sun, 10 Mar 2024 14:35:30 GMT
- Title: Brain-ID: Learning Contrast-agnostic Anatomical Representations for
Brain Imaging
- Authors: Peirong Liu and Oula Puonti and Xiaoling Hu and Daniel C. Alexander
and Juan E. Iglesias
- Abstract summary: We introduce Brain-ID, an anatomical representation learning model for brain imaging.
With the proposed "mild-to-severe" intra-subject generation, Brain-ID robustly captures subject-specific brain anatomy regardless of image appearance.
We present new metrics to validate the intra- and inter-subject robustness of Brain-ID features, and evaluate them on four downstream applications.
- Score: 11.06907516321673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent learning-based approaches have made astonishing advances in calibrated
medical imaging like computed tomography (CT), yet they struggle to
generalize in uncalibrated modalities -- notably magnetic resonance (MR)
imaging, where performance is highly sensitive to the differences in MR
contrast, resolution, and orientation. This prevents broad applicability to
diverse real-world clinical protocols. We introduce Brain-ID, an anatomical
representation learning model for brain imaging. With the proposed
"mild-to-severe" intra-subject generation, Brain-ID is robust to the
subject-specific brain anatomy regardless of the appearance of acquired images
(e.g., contrast, deformation, resolution, artifacts). Trained entirely on
synthetic data, Brain-ID readily adapts to various downstream tasks through
only one layer. We present new metrics to validate the intra- and inter-subject
robustness of Brain-ID features, and evaluate their performance on four
downstream applications, covering contrast-independent (anatomy
reconstruction/contrast synthesis, brain segmentation), and contrast-dependent
(super-resolution, bias field estimation) tasks. Extensive experiments on six
public datasets demonstrate that Brain-ID achieves state-of-the-art performance
in all tasks on different MRI modalities and CT, and more importantly,
preserves its performance on low-resolution and small datasets. Code is
available at https://github.com/peirong26/Brain-ID.
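The abstract states that Brain-ID, trained entirely on synthetic data, "readily adapts to various downstream tasks through only one layer." A minimal sketch of that adaptation pattern is a frozen feature extractor plus a single trainable linear head. The backbone stub, dimensions, and toy voxel-classification task below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: input intensities, feature width, classes, voxels.
D_IN, D_FEAT, N_CLASSES, N_VOX = 8, 16, 4, 100

# Stand-in for the frozen Brain-ID backbone: a fixed nonlinear map from
# raw intensities to features. The real backbone is a deep network
# trained on synthetic data; this stub only illustrates the idea.
W_backbone = rng.standard_normal((D_IN, D_FEAT))

def extract_features(x):
    """Frozen feature extraction; these weights are never updated."""
    return np.tanh(x @ W_backbone)

def train_head(feats, labels, lr=0.1, epochs=200):
    """Train ONE linear layer (softmax head) on top of frozen features."""
    W = np.zeros((feats.shape[1], N_CLASSES))
    onehot = np.eye(N_CLASSES)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Full-batch gradient step on the cross-entropy loss.
        W -= lr * feats.T @ (p - onehot) / len(labels)
    return W

# Toy data: labels depend linearly on the input, so they are learnable.
x = rng.standard_normal((N_VOX, D_IN))
labels = (x @ rng.standard_normal(D_IN) > 0).astype(int)

feats = extract_features(x)        # backbone stays frozen
W_head = train_head(feats, labels)  # only the head is trained
pred = (feats @ W_head).argmax(axis=1)
acc = (pred == labels).mean()
print(f"training accuracy with a single-layer head: {acc:.2f}")
```

The point of the sketch is only the division of labor: all task-specific learning happens in the single head, which is why such adaptation remains cheap on small datasets.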
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- BrainSegFounder: Towards 3D Foundation Models for Neuroimage Segmentation [6.5388528484686885]
This study introduces a novel approach towards the creation of medical foundation models.
Our method involves a novel two-stage pretraining approach using vision transformers.
BrainFounder demonstrates a significant performance gain, surpassing the achievements of previous winning solutions.
arXiv Detail & Related papers (2024-06-14T19:49:45Z)
- Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z)
- Aligning brain functions boosts the decoding of visual semantics in novel subjects [3.226564454654026]
We propose to boost brain decoding by aligning brain responses to videos and static images across subjects.
Our method improves out-of-subject decoding performance by up to 75%.
It also outperforms classical single-subject approaches when fewer than 100 minutes of data is available for the tested subject.
arXiv Detail & Related papers (2023-12-11T15:55:20Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning [0.0]
Segment Anything Model (SAM) has the potential to emerge as a more accurate, robust and versatile tool for a broad range of brain extraction and segmentation applications.
We compare SAM with a widely used and current gold standard technique called BET on a variety of brain scans with varying image qualities, MR sequences, and brain lesions affecting different brain regions.
arXiv Detail & Related papers (2023-04-10T17:50:52Z)
- DeepBrainPrint: A Novel Contrastive Framework for Brain MRI Re-Identification [2.5855676778881334]
We propose an AI-powered framework called DeepBrainPrint to retrieve brain MRI scans of the same patient.
Our framework is a semi-self-supervised contrastive deep learning approach with three main innovations.
We tested DeepBrainPrint on a large dataset of T1-weighted brain MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI).
arXiv Detail & Related papers (2023-02-25T11:03:16Z)
- BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP for Generic Natural Visual Stimulus Decoding [51.911473457195555]
BrainCLIP is a task-agnostic fMRI-based brain decoding model.
It bridges the modality gap between brain activity, image, and text.
BrainCLIP can reconstruct visual stimuli with high semantic fidelity.
arXiv Detail & Related papers (2023-02-25T03:28:54Z)
- FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data including skull or any other artifacts without preprocessing the images or a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.