Spatially Covariant Image Registration with Text Prompts
- URL: http://arxiv.org/abs/2311.15607v2
- Date: Tue, 6 Feb 2024 01:31:24 GMT
- Title: Spatially Covariant Image Registration with Text Prompts
- Authors: Xiang Chen, Min Liu, Rongguang Wang, Renjiu Hu, Dongdong Liu, Gaolei
Li, and Hang Zhang
- Abstract summary: TextSCF is a novel method that integrates spatially covariant filters and textual anatomical prompts encoded by visual-language models.
TextSCF not only boosts computational efficiency but also retains or improves registration accuracy.
Its performance has been rigorously tested on inter-subject brain MRI and abdominal CT registration tasks.
- Score: 10.339385546491284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical images are often characterized by their structured anatomical
representations and spatially inhomogeneous contrasts. Leveraging anatomical
priors in neural networks can greatly enhance their utility in
resource-constrained clinical settings. Prior research has harnessed such
information for image segmentation, yet progress in deformable image
registration has been modest. Our work introduces textSCF, a novel method that
integrates spatially covariant filters and textual anatomical prompts encoded
by visual-language models, to fill this gap. This approach optimizes an
implicit function that correlates text embeddings of anatomical regions to
filter weights, relaxing the typical translation-invariance constraint of
convolutional operations. TextSCF not only boosts computational efficiency but
can also retain or improve registration accuracy. By capturing the contextual
interplay between anatomical regions, it offers impressive inter-regional
transferability and the ability to preserve structural discontinuities during
registration. TextSCF's performance has been rigorously tested on inter-subject
brain MRI and abdominal CT registration tasks, outperforming existing
state-of-the-art models in the MICCAI Learn2Reg 2021 challenge and leading the
leaderboard. In abdominal registrations, textSCF's larger model variant
improved the Dice score by 11.3% over the second-best model, while its smaller
variant maintained similar accuracy but with an 89.13% reduction in network
parameters and a 98.34% decrease in computational operations.
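The central mechanism, an implicit function that maps text embeddings of anatomical regions to filter weights, can be illustrated with a short PyTorch sketch. The module below is a minimal reading of the abstract rather than the authors' released code: it assumes a precomputed text embedding per region (e.g., from a CLIP-style visual-language encoder) and a soft region-probability map, and blends per-region 1x1x1 filters accordingly; all names and shapes are our own assumptions.

```python
import torch
import torch.nn as nn


class TextConditionedSpatialFilter(nn.Module):
    """Illustrative sketch: an MLP maps each region's text embedding to a
    1x1x1 conv filter; the filters are blended per voxel with soft region
    probabilities, so the effective filter varies across space."""

    def __init__(self, text_dim: int, in_ch: int, out_ch: int):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        # Implicit function: text embedding -> filter weights (+ bias).
        self.weight_mlp = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(),
            nn.Linear(256, out_ch * in_ch + out_ch),
        )

    def forward(self, feats, text_emb, region_prob):
        # feats:       (B, C_in, D, H, W) image features
        # text_emb:    (K, text_dim), one embedding per anatomical region
        # region_prob: (B, K, D, H, W), soft region assignment summing to 1 over K
        K = text_emb.shape[0]
        params = self.weight_mlp(text_emb)                       # (K, C_out*C_in + C_out)
        w = params[:, : self.out_ch * self.in_ch].view(K, self.out_ch, self.in_ch)
        bias = params[:, self.out_ch * self.in_ch :]             # (K, C_out)
        # Apply every region's filter everywhere, then blend by region probability.
        out = torch.einsum("bcdhw,koc->bkodhw", feats, w)
        out = out + bias.view(1, K, self.out_ch, 1, 1, 1)
        return (out * region_prob.unsqueeze(2)).sum(dim=1)       # (B, C_out, D, H, W)
```

Blending per-region filters with soft probabilities keeps the operation differentiable while letting the effective weights change with anatomy, which is the sense in which the usual translation-invariance of convolution is relaxed.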
Related papers
- Boosting Medical Image Segmentation Performance with Adaptive Convolution Layer [6.887244952811574]
We propose an adaptive layer placed ahead of leading deep-learning models such as UCTransNet.
Our approach enhances the network's ability to handle diverse anatomical structures and subtle image details.
It consistently outperforms traditional CNNs that use fixed kernel sizes, while using a similar number of parameters.
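The summary does not specify the adaptive mechanism, so the sketch below is only one plausible realization offered as an assumption: a front layer that mixes several kernel sizes with input-dependent gates (selective-kernel style), placed ahead of the segmentation backbone. The module name, layer sizes, and kernel choices are hypothetical.

```python
import torch
import torch.nn as nn


class MultiScaleFrontLayer(nn.Module):
    """Hypothetical 'adaptive' front layer: parallel convolutions with different
    kernel sizes, mixed by a learned, input-dependent gate."""

    def __init__(self, in_ch=1, out_ch=16, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, len(kernel_sizes), 1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):                                      # x: (B, in_ch, H, W)
        g = self.gate(x)                                       # (B, n_branches, 1, 1)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, n_branches, out_ch, H, W)
        return (outs * g.unsqueeze(2)).sum(dim=1)              # (B, out_ch, H, W)
```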
arXiv Detail & Related papers (2024-04-17T13:18:39Z) - Language Guided Domain Generalized Medical Image Segmentation [68.93124785575739]
Single source domain generalization holds promise for more reliable and consistent image segmentation across real-world clinical settings.
We propose an approach that explicitly leverages textual information by incorporating a contrastive learning mechanism guided by the text encoder features.
Our approach achieves favorable performance against existing methods in the literature.
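One generic form of contrastive learning guided by text-encoder features is the symmetric InfoNCE objective used in CLIP-style training; it is shown below only to make the idea concrete, and the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F


def text_guided_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Generic InfoNCE alignment of pooled image features to matching text
    embeddings (same order along the batch dimension); illustrative only.
    img_feats, txt_feats: (N, D)"""
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    logits = img @ txt.t() / temperature                 # (N, N) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric cross-entropy: match each image to its text and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```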
arXiv Detail & Related papers (2024-04-01T17:48:15Z) - ContourDiff: Unpaired Image-to-Image Translation with Structural Consistency for Medical Imaging [14.487188068402178]
We introduce a novel metric to quantify the structural bias between domains, which must be considered for proper translation.
We then propose ContourDiff, a novel image-to-image translation algorithm that leverages domain-invariant anatomical contour representations.
We evaluate our method on challenging lumbar spine and hip-and-thigh CT-to-MRI translation tasks.
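As a stand-in for a domain-invariant anatomical contour representation, a simple Sobel gradient-magnitude map already captures much of the structural outline and could condition a translation model; the paper's actual contour extraction is not specified in the summary, so the function below is purely illustrative.

```python
import torch
import torch.nn.functional as F


def contour_map(image, eps=1e-6):
    """Sobel gradient magnitude, normalised to [0, 1], as a rough anatomical
    contour representation. image: (B, 1, H, W) intensity slice."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    kx = kx.view(1, 1, 3, 3).to(image)
    ky = ky.view(1, 1, 3, 3).to(image)
    gx = F.conv2d(image, kx, padding=1)
    gy = F.conv2d(image, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + eps)
    return mag / (mag.amax(dim=(-2, -1), keepdim=True) + eps)
```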
arXiv Detail & Related papers (2024-03-16T03:33:52Z) - SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image
registration framework using stable sampling and regularized transformation [19.683682147655496]
In this work, we introduce a fast and accurate method for unsupervised 3D medical image registration building on top of a Self-supervised Anatomical eMbedding algorithm.
We name our approach SAM-Enhanced registration (SAME++), which decomposes image registration into four steps: affine transformation, coarse deformation, deep non-parametric transformation, and instance optimization.
As a complete registration framework, SAME++ markedly outperforms leading methods by 4.2% to 8.2% in terms of Dice score.
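Any such multi-stage pipeline has to chain the transformations its stages produce. The helper below composes two dense displacement fields, generic registration machinery shown for illustration rather than code from the SAME++ release; the field shapes and the normalised-coordinate convention are assumptions.

```python
import torch
import torch.nn.functional as F


def compose_displacements(phi1, phi2):
    """Return the displacement field equivalent to warping with phi1 first and
    phi2 second. phi1, phi2: (B, 2, H, W) in normalised [-1, 1] coordinates."""
    B, _, H, W = phi1.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=phi1.device),
        torch.linspace(-1, 1, W, device=phi1.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)  # (B, H, W, 2)
    # Sample phi1 at the locations phi2 points to, then add phi2.
    warped_phi1 = F.grid_sample(phi1, grid + phi2.permute(0, 2, 3, 1),
                                align_corners=True)
    return warped_phi1 + phi2
```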
arXiv Detail & Related papers (2023-11-25T10:11:04Z) - Region-based Contrastive Pretraining for Medical Image Retrieval with
Anatomic Query [56.54255735943497]
We introduce RegionMIR, a novel region-based contrastive pretraining framework for medical image retrieval with anatomic queries.
arXiv Detail & Related papers (2023-05-09T16:46:33Z) - Collaborative Quantization Embeddings for Intra-Subject Prostate MR
Image Registration [13.1575656942321]
This paper describes a development that improves learning-based registration algorithms.
We propose a hierarchical quantization method, discretizing the learned feature vectors using a jointly-trained dictionary.
Based on 216 real clinical images from 86 prostate cancer patients, we show the efficacy of both designed components.
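A common way to discretize learned feature vectors against a jointly trained dictionary is VQ-VAE-style nearest-codebook quantization with a straight-through estimator. The sketch below shows that generic building block; whether the paper's hierarchical scheme simply stacks such stages is our assumption, not something stated in the summary.

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Discretise feature vectors with a learnable codebook (generic VQ layer)."""

    def __init__(self, num_codes=256, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                                # z: (N, dim)
        d = torch.cdist(z, self.codebook.weight)         # (N, num_codes)
        idx = d.argmin(dim=1)
        z_q = self.codebook(idx)
        # Commitment + codebook losses keep the encoder and dictionary aligned.
        loss = self.beta * ((z_q.detach() - z) ** 2).mean() \
             + ((z_q - z.detach()) ** 2).mean()
        z_q = z + (z_q - z).detach()                      # straight-through gradient
        return z_q, idx, loss
```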
arXiv Detail & Related papers (2022-07-13T13:32:18Z) - A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
In registration experiments on cardiac magnetic resonance (MR) images, we demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations.
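One common way to preserve discontinuities, offered here as a generic sketch rather than the authors' exact design, is to predict a separate deformation field per anatomical sub-region and blend the fields with region masks, so the combined field may jump across region boundaries while staying smooth inside each region.

```python
import torch


def combine_region_fields(fields, masks):
    """Blend per-region displacement fields with (soft) region masks.
    fields: (B, K, 3, D, H, W), one displacement field per region
    masks:  (B, K, D, H, W), region probabilities summing to 1 over K
    Returns a single field of shape (B, 3, D, H, W)."""
    return (fields * masks.unsqueeze(2)).sum(dim=1)
```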
arXiv Detail & Related papers (2021-07-09T13:35:59Z) - Automatic size and pose homogenization with spatial transformer network
to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
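The homogenization step rests on the standard spatial transformer construction: a small localization network regresses an affine transform that is applied with `affine_grid`/`grid_sample`. The sketch below shows that textbook form with illustrative layer sizes, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineSTN(nn.Module):
    """Minimal 2-D spatial transformer: regress an affine matrix, then resample."""

    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(10, 6),
        )
        # Initialise the regressor to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):                                  # x: (B, 1, H, W)
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```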
arXiv Detail & Related papers (2021-07-06T14:50:03Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
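Many few-shot segmentors score query locations against a class prototype pooled from the support features. The function below shows that generic prototype-plus-cosine-similarity step purely as an illustration; the paper's global correlation module may differ.

```python
import torch
import torch.nn.functional as F


def prototype_similarity(support_feat, support_mask, query_feat):
    """Masked-average the support features into a class prototype, then score
    every query location by cosine similarity.
    support_feat, query_feat: (B, C, H, W); support_mask: (B, 1, H, W) in {0, 1}."""
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    proto = (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)
    proto = proto[:, :, None, None].expand_as(query_feat)
    return F.cosine_similarity(query_feat, proto, dim=1)   # (B, H, W)
```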
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning-based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
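A standard way to obtain approximately diffeomorphic deformations in learning-based registration is to integrate a stationary velocity field by scaling and squaring. The sketch below shows that construction as background, without claiming it matches the paper's multi-scale propagation scheme.

```python
import torch
import torch.nn.functional as F


def integrate_velocity(vel, steps=7):
    """Scaling-and-squaring integration of a stationary 2-D velocity field into
    a displacement field. vel: (B, 2, H, W) in normalised [-1, 1] coordinates."""
    B, _, H, W = vel.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=vel.device),
        torch.linspace(-1, 1, W, device=vel.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    disp = vel / (2 ** steps)
    for _ in range(steps):
        # Each iteration composes the flow with itself, doubling the time step.
        disp = disp + F.grid_sample(disp, grid + disp.permute(0, 2, 3, 1),
                                    align_corners=True)
    return disp
```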
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Learning Deformable Registration of Medical Images with Anatomical
Constraints [4.397224870979238]
Deformable image registration is a fundamental problem in the field of medical image analysis.
We learn global non-linear representations of image anatomy using segmentation masks, and employ them to constrain the registration process.
Our experiments show that the proposed anatomically constrained registration model produces more realistic and accurate results than state-of-the-art methods.
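Schematically, an anatomical constraint of this kind penalizes the distance between warped and fixed segmentations in a learned representation space. The function below assumes a hypothetical `encoder` pre-trained on segmentation masks (e.g., the encoder of a mask autoencoder) and is only a schematic reading of the summary, not the authors' formulation.

```python
import torch


def anatomical_constraint_loss(encoder, warped_seg, fixed_seg):
    """Distance between warped and fixed label maps in a learned anatomy space.
    encoder: a network pre-trained on segmentation masks (hypothetical here)
    warped_seg, fixed_seg: (B, K, H, W) one-hot or soft label maps."""
    with torch.no_grad():
        target = encoder(fixed_seg)
    return torch.mean((encoder(warped_seg) - target) ** 2)
```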
arXiv Detail & Related papers (2020-01-20T17:44:11Z)