Progressive Retinal Image Registration via Global and Local Deformable Transformations
- URL: http://arxiv.org/abs/2409.01068v2
- Date: Wed, 16 Oct 2024 07:49:27 GMT
- Title: Progressive Retinal Image Registration via Global and Local Deformable Transformations
- Authors: Yepeng Liu, Baosheng Yu, Tian Chen, Yuliang Gu, Bo Du, Yongchao Xu, Jun Cheng
- Abstract summary: We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retinal image registration plays an important role in the ophthalmological diagnosis process. Since there are variations in viewing angles and anatomical structures across different retinal images, keypoint-based approaches have become the mainstream methods for retinal image registration thanks to their robustness and low latency. These methods typically assume the retinal surfaces are planar, and adopt feature matching to obtain the homography matrix that represents the global transformation between images. Yet, such a planar hypothesis inevitably introduces registration errors since the retinal surface is in fact curved. This limitation is more prominent when registering image pairs with significant differences in viewing angles. To address this problem, we propose a hybrid registration framework called HybridRetina, which progressively registers retinal images with global and local deformable transformations. For that, we use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation, respectively. Specifically, we integrate multi-level pixel relation knowledge to guide the training of GAMorph. Additionally, we utilize an edge attention module that includes the geometric priors of the images, ensuring the deformation field focuses more on the vascular regions of clinical interest. Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods. The code is available at https://github.com/lyp-deeplearning/awesome-retinal-registration.
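The global stage of such a keypoint-based pipeline, recovering a homography from matched keypoints, can be sketched as follows. This is an illustrative NumPy implementation of the standard Direct Linear Transform (DLT), not the authors' code; the function names are hypothetical.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences
    (x, y) -> (u, v) via the Direct Linear Transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of A with the
    # smallest singular value (the null space of A for exact data).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply a homography to a single 2D point (homogeneous divide)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In practice the correspondences come from a keypoint detector and matcher, and a robust estimator such as RANSAC is used to reject outlier matches before the DLT step.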
Related papers
- RetinaRegNet: A Zero-Shot Approach for Retinal Image Registration [10.430563602981705]
RetinaRegNet is a zero-shot registration model designed to register retinal images with minimal overlap, large deformations, and varying image quality.
We implement a two-stage registration framework to handle large deformations.
Our model consistently outperformed state-of-the-art methods across all datasets.
arXiv Detail & Related papers (2024-04-24T17:50:37Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- VesselMorph: Domain-Generalized Retinal Vessel Segmentation via Shape-Aware Representation [12.194439938007672]
Domain shift is an inherent property of medical images and has become a major obstacle for large-scale deployment of learning-based algorithms.
We propose a method named VesselMorph which generalizes the 2D retinal vessel segmentation task by synthesizing a shape-aware representation.
VesselMorph achieves superior generalization performance compared with competing methods in different domain shift scenarios.
arXiv Detail & Related papers (2023-07-01T06:02:22Z)
- Learning Homeomorphic Image Registration via Conformal-Invariant Hyperelastic Regularisation [9.53064372566798]
We propose a novel framework for deformable image registration based on conformal-invariant properties.
Our regulariser enforces the deformation field to be smooth, invertible and orientation-preserving.
We demonstrate, through numerical and visual experiments, that our framework is able to outperform current techniques for image registration.
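A common way to verify the orientation-preserving property mentioned above is to check that the Jacobian determinant of the deformation phi(x) = x + u(x) stays positive everywhere. Below is a minimal NumPy sketch of that check, not the paper's regulariser; the function name and field layout are assumptions.

```python
import numpy as np

def jacobian_determinant(disp):
    """Determinant of the Jacobian of phi(x) = x + u(x) for a 2D
    displacement field u of shape (H, W, 2), estimated with central
    differences. det > 0 everywhere means the map is locally
    orientation-preserving (no folding)."""
    dudy, dudx = np.gradient(disp[..., 0])  # gradients of u_x along y, x
    dvdy, dvdx = np.gradient(disp[..., 1])  # gradients of u_y along y, x
    # Jacobian of phi is I + grad(u); expand the 2x2 determinant.
    return (1.0 + dudx) * (1.0 + dvdy) - dudy * dvdx
```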
arXiv Detail & Related papers (2023-03-14T17:47:18Z)
- Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject temporal image registration using large-scale cine cardiac magnetic resonance image sequences.
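Whether the estimated field is globally smooth or discontinuity-preserving, the final warping step is the same: each output pixel samples the moving image at a displaced location. A minimal bilinear-interpolation sketch in NumPy, assuming a dense (dy, dx) displacement field; this is an illustration, not the paper's implementation.

```python
import numpy as np

def warp_image(img, disp):
    """Warp a 2D image by a dense displacement field with bilinear
    interpolation: out(y, x) = img(y + dy, x + dx), clamped at borders.
    img: (H, W) array; disp: (H, W, 2) array of (dy, dx) offsets."""
    H, W = img.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    ys = np.clip(yy + disp[..., 0], 0, H - 1)
    xs = np.clip(xx + disp[..., 1], 0, W - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = ys - y0; wx = xs - x0
    # Interpolate along x on the two bracketing rows, then along y.
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```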
arXiv Detail & Related papers (2022-11-24T23:45:01Z)
- Segmentation-guided Domain Adaptation and Data Harmonization of Multi-device Retinal Optical Coherence Tomography using Cycle-Consistent Generative Adversarial Networks [2.968191199408213]
This paper proposes a segmentation-guided domain-adaptation method to adapt images from multiple devices into a single image domain.
It avoids the time-consuming manual labelling of each new dataset and the re-training of the existing network.
arXiv Detail & Related papers (2022-08-31T05:06:00Z)
- SD-LayerNet: Semi-supervised retinal layer segmentation in OCT using disentangled representation with anatomical priors [4.2663199451998475]
We introduce a semi-supervised paradigm into the retinal layer segmentation task.
In particular, a novel fully differentiable approach is used for converting surface position regression into a pixel-wise structured segmentation.
In parallel, we propose a set of anatomical priors to improve network training when a limited amount of labeled data is available.
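The conversion from surface position regression to a pixel-wise structured segmentation can be made differentiable by replacing a hard per-column threshold with a sigmoid along each column. The sketch below is a simplified stand-in for the paper's approach; the function name and the `smooth` parameter are assumptions.

```python
import numpy as np

def surface_to_mask(surface, height, smooth=1.0):
    """Turn a per-column surface row position (regression output of
    shape (W,)) into a soft binary mask of shape (H, W): pixels below
    the surface approach 1, pixels above approach 0. The sigmoid
    temperature `smooth` controls the transition sharpness."""
    rows = np.arange(height)[:, None]                      # (H, 1)
    return 1.0 / (1.0 + np.exp(-(rows - surface[None, :]) / smooth))
```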
arXiv Detail & Related papers (2022-07-01T14:30:59Z)
- Dual-Flow Transformation Network for Deformable Image Registration with Region Consistency Constraint [95.30864269428808]
Current deep learning (DL)-based image registration approaches learn the spatial transformation from one image to another by leveraging a convolutional neural network.
We present a novel dual-flow transformation network with region consistency constraint which maximizes the similarity of ROIs within a pair of images.
Experiments on four public 3D MRI datasets show that the proposed method achieves the best registration performance in accuracy and generalization.
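Maximizing the similarity of ROIs within an image pair requires a similarity measure; normalized cross-correlation (NCC) is a standard choice in registration because it is invariant to affine intensity changes. A minimal NumPy sketch, illustrative only and not the paper's loss:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two same-shape regions:
    1.0 for identical regions (up to affine intensity change),
    -1.0 for perfectly anti-correlated regions."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```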
arXiv Detail & Related papers (2021-12-04T05:30:44Z)
- A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations, in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.