Tell2Reg: Establishing spatial correspondence between images by the same language prompts
- URL: http://arxiv.org/abs/2502.03118v1
- Date: Wed, 05 Feb 2025 12:25:02 GMT
- Title: Tell2Reg: Establishing spatial correspondence between images by the same language prompts
- Authors: Wen Yan, Qianye Yang, Shiqi Huang, Yipei Wang, Shonit Punwani, Mark Emberton, Vasilis Stavrinides, Yipeng Hu, Dean Barratt
- Abstract summary: We show that a corresponding region pair can be predicted by the same language prompt on two different images.
This enables a fully automated and training-free registration algorithm.
Tell2Reg is training-free, eliminating the need for costly and time-consuming data curation and labelling.
- Score: 7.064676360230362
- Abstract: Spatial correspondence can be represented by pairs of segmented regions, such that image registration networks aim to segment corresponding regions rather than predict displacement fields or transformation parameters. In this work, we show that such a corresponding region pair can be predicted by the same language prompt on two different images, using pre-trained large multimodal models based on GroundingDINO and SAM. This enables a fully automated and training-free registration algorithm that is potentially generalisable to a wide range of image registration tasks. In this paper, we present experimental results on one of the more challenging tasks, registering inter-subject prostate MR images, which involves highly variable intensity and morphology between patients. Tell2Reg is training-free, eliminating the need for the costly and time-consuming data curation and labelling previously required for this registration task. The approach outperforms the unsupervised learning-based registration methods tested and performs comparably to weakly-supervised methods. Additional qualitative results suggest, for the first time, a potential correlation between language semantics and spatial correspondence, including the spatial invariance of language-prompted regions and the difference in language prompts between the obtained local and global correspondences. Code is available at https://github.com/yanwenCi/Tell2Reg.git.
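As a minimal sketch of the idea (not the authors' released code), the pipeline reads: one prompt, two images, one mask per image, and each resulting mask pair is a spatial correspondence. The `detect_and_segment` helper below is a hypothetical stand-in for the pre-trained GroundingDINO + SAM stage.

```python
import numpy as np

def detect_and_segment(image: np.ndarray, prompt: str) -> np.ndarray:
    """Hypothetical stand-in for the pre-trained GroundingDINO + SAM stage:
    GroundingDINO grounds `prompt` to a box, SAM converts the box into a
    binary mask. Returns an HxW boolean mask (all-False if nothing found)."""
    raise NotImplementedError("plug in the pre-trained models here")

def tell2reg_pairs(fixed: np.ndarray, moving: np.ndarray, prompts: list[str]):
    """Same prompt on both images -> one corresponding region pair per prompt.
    No training, gradients, or labels are involved."""
    pairs = []
    for prompt in prompts:
        mask_fixed = detect_and_segment(fixed, prompt)
        mask_moving = detect_and_segment(moving, prompt)
        if mask_fixed.any() and mask_moving.any():
            pairs.append((prompt, mask_fixed, mask_moving))
    return pairs
```

Downstream, the mask pairs can be converted into a dense transformation by any region-to-field fitting step; the key point is that the correspondence itself comes for free from the shared prompt.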
Related papers
- MsMorph: An Unsupervised pyramid learning network for brain image registration [4.000367245594772]
MsMorph is an image registration framework aimed at mimicking the manual process of registering image pairs.
It decodes semantic information at different scales and continuously compensates for the predicted deformation field.
The proposed method simulates the manual approach to registration, focusing on different regions of the image pairs and their neighborhoods.
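A generic coarse-to-fine sketch of this compensation scheme (an assumed structure, not MsMorph's actual architecture; `predict_residual` is a hypothetical per-level network):

```python
import numpy as np

def upsample2x(field: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of a (2, H, W) displacement field,
    doubling the displacement magnitudes to match the finer grid."""
    return 2.0 * field.repeat(2, axis=1).repeat(2, axis=2)

def pyramid_register(predict_residual, pyramid_fixed, pyramid_moving):
    """Coarse-to-fine composition: each level predicts a correction
    ('compensation') to the upsampled field from the coarser level.
    Image pyramids are ordered coarse -> fine."""
    h, w = pyramid_fixed[0].shape
    field = np.zeros((2, h, w))
    for f, m in zip(pyramid_fixed, pyramid_moving):
        if field.shape[1:] != f.shape:
            field = upsample2x(field)
        field = field + predict_residual(f, m, field)
    return field
```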
arXiv Detail & Related papers (2024-10-23T19:20:57Z)
- SAMReg: SAM-enabled Image Registration with ROI-based Correspondence [12.163299991979574]
This paper describes a new spatial correspondence representation based on paired regions-of-interest (ROIs) for medical image registration.
We develop a new registration algorithm SAMReg, which does not require any training (or training data), gradient-based fine-tuning or prompt engineering.
The proposed method outperforms both intensity-based iterative algorithms and learning-based networks that predict dense displacement fields (DDFs) across the tested metrics.
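A minimal illustration of how paired ROIs could be turned into a dense displacement field, here by Gaussian-weighted scattering of centroid displacements (an assumed interpolation, not SAMReg's actual scheme):

```python
import numpy as np

def rois_to_field(pairs, shape, sigma=20.0):
    """Spread each ROI pair's centroid displacement over the image with
    Gaussian weights centred on the fixed-image ROI, then normalise.
    `pairs` is a list of (mask_fixed, mask_moving) boolean HxW arrays."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    field = np.zeros((2,) + shape)
    weight = np.full(shape, 1e-8)
    for mask_fixed, mask_moving in pairs:
        c_fixed = np.argwhere(mask_fixed).mean(axis=0)
        c_moving = np.argwhere(mask_moving).mean(axis=0)
        disp = c_moving - c_fixed
        w = np.exp(-((ys - c_fixed[0])**2 + (xs - c_fixed[1])**2) / (2 * sigma**2))
        field += disp[:, None, None] * w
        weight += w
    return field / weight
```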
arXiv Detail & Related papers (2024-10-17T23:23:48Z)
- WIDIn: Wording Image for Domain-Invariant Representation in Single-Source Domain Generalization [63.98650220772378]
We present WIDIn, Wording Images for Domain-Invariant representation, to disentangle discriminative visual representation.
We first estimate the language embedding with fine-grained alignment, which can be used to adaptively identify and then remove the domain-specific counterpart.
We show that WIDIn can be applied to both pretrained vision-language models like CLIP, and separately trained uni-modal models like MoCo and BERT.
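One plausible reading of the removal step is a vector rejection of the visual feature against a language-estimated domain direction; the sketch below is an assumption, not WIDIn's published formulation.

```python
import numpy as np

def remove_domain_component(visual_feat, domain_text_embed):
    """Illustrative (assumed) disentanglement step: project the visual
    feature onto the language-estimated domain direction and subtract that
    component, keeping the domain-invariant remainder."""
    d = domain_text_embed / np.linalg.norm(domain_text_embed)
    return visual_feat - np.dot(visual_feat, d) * d
```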
arXiv Detail & Related papers (2024-05-28T17:46:27Z)
- One registration is worth two segmentations [12.163299991979574]
The goal of image registration is to establish spatial correspondence between two or more images.
We propose an alternative but more intuitive correspondence representation: a set of corresponding regions-of-interest (ROI) pairs.
We experimentally show that the proposed SAMReg is capable of segmenting and matching multiple ROI pairs.
arXiv Detail & Related papers (2024-05-17T16:14:32Z)
- MENTOR: Multilingual tExt detectioN TOward leaRning by analogy [59.37382045577384]
We propose a framework to detect and identify both seen and unseen language regions inside scene images.
"MENTOR" is the first work to realize a learning strategy between zero-shot learning and few-shot learning for multilingual scene text detection.
arXiv Detail & Related papers (2024-03-12T03:35:17Z)
- CLIM: Contrastive Language-Image Mosaic for Region Representation [58.05870131126816]
Contrastive Language-Image Mosaic (CLIM) is a novel approach for aligning region and text representations.
CLIM consistently improves different open-vocabulary object detection methods.
It can effectively enhance the region representation of vision-language models.
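A schematic of the mosaic construction (an assumed layout; CLIM's training code is more involved): each tiled image becomes a pseudo region whose matching text is its own caption.

```python
import numpy as np

def make_mosaic(images):
    """Tile four HxWx3 images into one 2Hx2W mosaic. Each tile then acts
    as a pseudo region whose ground-truth text is that image's caption;
    pooled box features are contrasted against the captions."""
    top = np.concatenate(images[:2], axis=1)
    bottom = np.concatenate(images[2:], axis=1)
    mosaic = np.concatenate([top, bottom], axis=0)
    h, w = images[0].shape[:2]
    boxes = [(0, 0, w, h), (w, 0, 2 * w, h),       # (x1, y1, x2, y2)
             (0, h, w, 2 * h), (w, h, 2 * w, 2 * h)]
    return mosaic, boxes
```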
arXiv Detail & Related papers (2023-12-18T17:39:47Z)
- Spatial Correspondence between Graph Neural Network-Segmented Images [1.807691213023136]
Graph neural networks (GNNs) have been proposed for medical image segmentation.
This work explores the potential of such GNNs with a common topology for establishing spatial correspondence.
With an example application of registering local vertebral sub-regions found in CT images, our experimental results showed that the GNN-based segmentation is capable of accurate and reliable localization.
arXiv Detail & Related papers (2023-03-12T03:25:01Z)
- Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject temporal image registration using large-scale cine cardiac magnetic resonance image sequences.
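A schematic of the locally-smooth, globally-discontinuous constraint (an assumed loss form, not the paper's exact formulation): penalise field gradients only between neighbouring pixels that share a segmentation label.

```python
import numpy as np

def masked_smoothness(field, labels):
    """Penalise spatial gradients of a (2, H, W) displacement field only
    where neighbouring pixels share a label in the HxW segmentation, so the
    field stays smooth within structures but may jump across boundaries."""
    dy = np.diff(field, axis=1)                  # (2, H-1, W)
    dx = np.diff(field, axis=2)                  # (2, H, W-1)
    same_y = labels[1:, :] == labels[:-1, :]     # no penalty across boundaries
    same_x = labels[:, 1:] == labels[:, :-1]
    return (dy**2 * same_y).mean() + (dx**2 * same_x).mean()
```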
arXiv Detail & Related papers (2022-11-24T23:45:01Z)
- Dense Siamese Network [86.23741104851383]
We present Dense Siamese Network (DenseSiam), a simple unsupervised learning framework for dense prediction tasks.
It learns visual representations by maximizing the similarity between two views of one image with two types of consistency, i.e., pixel consistency and region consistency.
It surpasses state-of-the-art segmentation methods by 2.1 mIoU with only 28% of the training cost.
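A minimal sketch of the pixel-consistency term (the real method adds a projector, a predictor, and a stop-gradient, all omitted here):

```python
import numpy as np

def pixel_consistency(feat_a, feat_b):
    """Pixel-level consistency: maximise cosine similarity between the two
    views' features at corresponding pixel locations. Inputs are (C, H, W)
    dense feature maps from the two augmented views of one image."""
    a = feat_a / np.linalg.norm(feat_a, axis=0, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=0, keepdims=True)
    return -(a * b).sum(axis=0).mean()  # lower is better
```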
arXiv Detail & Related papers (2022-03-21T15:55:23Z)
- Few-shot image segmentation for cross-institution male pelvic organs using registration-assisted prototypical learning [13.567073992605797]
This work presents the first 3D few-shot interclass segmentation network for medical images.
It uses a labelled multi-institution dataset from prostate cancer patients with eight regions of interest.
A built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects.
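A minimal sketch of the prototypical-learning step (standard masked average pooling; the paper additionally warps support masks to the query via its registration mechanism):

```python
import numpy as np

def masked_prototype(features, mask):
    """Prototype for one ROI class: average the (C, H, W) support features
    over the labelled HxW mask."""
    return (features * mask).sum(axis=(1, 2)) / (mask.sum() + 1e-8)

def classify_by_prototypes(query_feats, prototypes):
    """Assign each query pixel the class of its nearest prototype
    (cosine similarity over the C channels)."""
    q = query_feats / np.linalg.norm(query_feats, axis=0, keepdims=True)
    p = np.stack([v / np.linalg.norm(v) for v in prototypes])  # (K, C)
    scores = np.einsum('kc,chw->khw', p, q)
    return scores.argmax(axis=0)
```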
arXiv Detail & Related papers (2022-01-17T11:44:10Z)
- RegionCLIP: Region-based Language-Image Pretraining [94.29924084715316]
Contrastive language-image pretraining (CLIP) using image-text pairs has achieved impressive results on image classification.
We propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations.
Our method significantly outperforms the state of the art by 3.8 AP50 and 2.2 AP for novel categories on COCO and LVIS datasets.
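A minimal sketch of region-text contrastive alignment in the spirit of RegionCLIP (an assumed InfoNCE form over matched region-caption pairs):

```python
import numpy as np

def region_text_nce(region_feats, text_embeds, tau=0.07):
    """Contrastive alignment: the i-th region should match the i-th
    (pseudo-)caption. Both inputs are L2-normalised (N, C) arrays;
    returns the mean cross-entropy over regions."""
    logits = region_feats @ text_embeds.T / tau      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```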
arXiv Detail & Related papers (2021-12-16T18:39:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.