FlowReg: Fast Deformable Unsupervised Medical Image Registration using
Optical Flow
- URL: http://arxiv.org/abs/2101.09639v1
- Date: Sun, 24 Jan 2021 03:51:34 GMT
- Title: FlowReg: Fast Deformable Unsupervised Medical Image Registration using
Optical Flow
- Authors: Sergiu Mocanu, Alan R. Moody, April Khademi
- Abstract summary: FlowReg is a framework for unsupervised image registration for neuroimaging applications.
FlowReg is able to obtain high intensity and spatial similarity while maintaining the shape and structure of anatomy and pathology.
- Score: 0.09167082845109438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose FlowReg, a deep learning-based framework for unsupervised image
registration for neuroimaging applications. The system is composed of two
architectures that are trained sequentially: FlowReg-A which affinely corrects
for gross differences between moving and fixed volumes in 3D followed by
FlowReg-O which performs pixel-wise deformations on a slice-by-slice basis for
fine tuning in 2D. The affine network regresses the 3D affine matrix based on a
correlation loss function that enforces global similarity. The deformable
network operates on 2D image slices based on the optical flow network
FlowNet-Simple but with three loss components. The photometric loss minimizes
pixel intensity differences, the smoothness loss encourages similar
magnitudes between neighbouring vectors, and the correlation loss maintains
intensity similarity between the fixed and moving image slices. The proposed
method is compared to four open-source registration techniques: ANTs, Demons,
SE, and Voxelmorph. In total, 4643 FLAIR MR imaging volumes are used
from dementia and vascular disease cohorts, acquired from over 60 international
centres with varying acquisition parameters. A battery of novel quantitative
registration validation metrics is proposed, focusing on the structural
integrity of tissues, spatial alignment, and intensity similarity.
Experimental results show FlowReg (FlowReg-A+O) performs better than the
iterative registration algorithms on intensity and spatial alignment metrics,
with a Pixelwise Agreement of 0.65, a correlation coefficient of 0.80, and a
Mutual Information of 0.29. Among the deep learning frameworks, FlowReg-A or
FlowReg-A+O provided the highest performance on all but one of the metrics.
Results show that FlowReg is able to obtain high intensity and spatial
similarity while maintaining the shape and structure of anatomy and pathology.
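A minimal sketch of how the three FlowReg-O loss components described above could be combined is given below. It is not the authors' implementation: the mean-squared photometric penalty, the first-order smoothness term, the Pearson-style correlation term, and the weights (lambda_photo, lambda_smooth, lambda_corr) are illustrative assumptions.

```python
import torch

# Minimal sketch of the three FlowReg-O loss terms described in the abstract.
# Shapes, the MSE photometric penalty, the first-order smoothness term, the
# Pearson-style correlation, and the lambda weights are assumptions, not the
# authors' implementation.

def photometric_loss(warped, fixed):
    # Penalise per-pixel intensity differences between the warped moving
    # slice and the fixed slice.
    return torch.mean((warped - fixed) ** 2)

def smoothness_loss(flow):
    # Encourage neighbouring flow vectors to have similar magnitudes by
    # penalising spatial gradients of the flow field (B, 2, H, W).
    dx = torch.abs(flow[:, :, :, 1:] - flow[:, :, :, :-1])
    dy = torch.abs(flow[:, :, 1:, :] - flow[:, :, :-1, :])
    return dx.mean() + dy.mean()

def correlation_loss(warped, fixed, eps=1e-8):
    # One minus the Pearson correlation between the two slices, so that
    # maximising intensity similarity minimises the loss.
    w = warped - warped.mean()
    f = fixed - fixed.mean()
    corr = (w * f).sum() / (torch.sqrt((w ** 2).sum() * (f ** 2).sum()) + eps)
    return 1.0 - corr

def flowreg_o_loss(warped, fixed, flow,
                   lambda_photo=1.0, lambda_smooth=1.0, lambda_corr=1.0):
    # Weighted sum of the three components; the weights are placeholders.
    return (lambda_photo * photometric_loss(warped, fixed)
            + lambda_smooth * smoothness_loss(flow)
            + lambda_corr * correlation_loss(warped, fixed))
```

In this sketch, the warped moving slice is assumed to come from resampling the moving slice with the predicted flow field; FlowReg-A's correlation loss plays the analogous global-similarity role at the affine, whole-volume stage.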
Related papers
- FLD+: Data-efficient Evaluation Metric for Generative Models [4.093503153499691]
We introduce a new metric to assess the quality of generated images that is more reliable, data-efficient, compute-efficient, and adaptable to new domains.
The proposed metric is based on normalizing flows, which allows for the computation of density (exact log-likelihood) of images from any domain.
arXiv Detail & Related papers (2024-11-23T15:12:57Z) - WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z) - DA-Flow: Dual Attention Normalizing Flow for Skeleton-based Video Anomaly Detection [52.74152717667157]
We propose a lightweight module called the Dual Attention Module (DAM) for capturing cross-dimension interaction relationships in spatio-temporal skeletal data.
It employs a frame attention mechanism to identify the most significant frames and a skeleton attention mechanism to capture broader relationships across fixed partitions with minimal parameters and FLOPs.
arXiv Detail & Related papers (2024-06-05T06:18:03Z) - RetinaRegNet: A Zero-Shot Approach for Retinal Image Registration [10.430563602981705]
RetinaRegNet is a zero-shot registration model designed to register retinal images with minimal overlap, large deformations, and varying image quality.
We implement a two-stage registration framework to handle large deformations.
Our model consistently outperformed state-of-the-art methods across all datasets.
arXiv Detail & Related papers (2024-04-24T17:50:37Z) - AI pipeline for accurate retinal layer segmentation using OCT 3D images [3.938455123895825]
Several classical and AI-based algorithms are tested in combination to assess their compatibility with data from the combined animal imaging system.
A simple-to-implement analytical equation is shown to work for brightness manipulation with a 1% increment in mean pixel values.
The thickness estimation process has a 6% error compared to manually annotated standard data.
arXiv Detail & Related papers (2023-02-15T17:46:32Z) - CRAFT: Cross-Attentional Flow Transformer for Robust Optical Flow [23.457898451057275]
Optical flow estimation aims to find the 2D motion field by identifying corresponding pixels between two images.
Despite the tremendous progress of deep learning-based optical flow methods, it remains a challenge to accurately estimate large displacements with motion blur.
This is mainly because the correlation volume, the basis of pixel matching, is computed as the dot product of the convolutional features of the two images (a minimal sketch of this computation appears after this list).
We propose a new architecture "CRoss-Attentional Flow Transformer" (CRAFT) to revitalize the correlation volume computation.
arXiv Detail & Related papers (2022-03-31T09:05:00Z) - A Robust Multimodal Remote Sensing Image Registration Method and System
Using Steerable Filters with First- and Second-order Gradients [7.813406811407584]
Co-registration of multimodal remote sensing images is still an ongoing challenge because of nonlinear radiometric differences (NRD) and significant geometric distortions.
In this paper, a robust matching method based on steerable filters is proposed, consisting of two critical steps.
The performance of the proposed matching method has been evaluated with many different kinds of multimodal RS images.
arXiv Detail & Related papers (2022-02-27T12:22:42Z) - Hierarchical Conditional Flow: A Unified Framework for Image
Super-Resolution and Image Rescaling [139.25215100378284]
We propose a hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling.
HCFlow learns a mapping between HR and LR image pairs by simultaneously modelling the distribution of the LR image and the remaining high-frequency component.
To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training.
arXiv Detail & Related papers (2021-08-11T16:11:01Z) - Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and accurate flow estimation can be achieved with only a fraction of elements in it.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z) - ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on Euler's discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z) - FPCR-Net: Feature Pyramidal Correlation and Residual Reconstruction for
Optical Flow Estimation [72.41370576242116]
We propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs.
It consists of two main modules: pyramid correlation mapping and residual reconstruction.
Experimental results show that the proposed scheme achieves state-of-the-art performance, with improvements of 0.80, 1.15, and 0.10 in terms of average end-point error (AEE) over competing baseline methods.
arXiv Detail & Related papers (2020-01-17T07:13:51Z)