ORRN: An ODE-based Recursive Registration Network for Deformable
Respiratory Motion Estimation with Lung 4DCT Images
- URL: http://arxiv.org/abs/2305.14673v2
- Date: Thu, 25 May 2023 04:56:19 GMT
- Title: ORRN: An ODE-based Recursive Registration Network for Deformable
Respiratory Motion Estimation with Lung 4DCT Images
- Authors: Xiao Liang, Shan Lin, Fei Liu, Dimitri Schreiber, and Michael Yip
- Abstract summary: Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data.
Recent Deep Learning methods have shown promising accuracy and speedup for registering a pair of medical images.
This paper presents ORRN, an Ordinary Differential Equations (ODE)-based recursive image registration network.
- Score: 7.180268723513929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deformable Image Registration (DIR) plays a significant role in quantifying
deformation in medical data. Recent Deep Learning methods have shown promising
accuracy and speedup for registering a pair of medical images. However, in 4D
(3D + time) medical data, organ motion, such as respiratory motion and heartbeat, cannot be effectively modeled by pair-wise methods, which are optimized for image pairs and do not account for the organ motion patterns present in 4D data. This paper presents ORRN, an Ordinary
Differential Equations (ODE)-based recursive image registration network. Our
network learns to estimate time-varying voxel velocities for an ODE that models
deformation in 4D image data. It adopts a recursive registration strategy to
progressively estimate a deformation field through ODE integration of voxel
velocities. We evaluate the proposed method on two publicly available lung 4DCT
datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the
extreme inhale image for 3D+t deformation tracking and 2) registering extreme
exhale to inhale phase images. Our method outperforms other learning-based
methods in both tasks, producing the smallest Target Registration Error of
1.24 mm and 1.26 mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and registration of each CT volume takes less than one second. ORRN demonstrates promising registration accuracy, deformation
plausibility, and computation efficiency on group-wise and pair-wise
registration tasks. It has significant implications in enabling fast and
accurate respiratory motion estimation for treatment planning in radiation
therapy or robot motion planning in thoracic needle insertion.
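
The recursive, ODE-based strategy described in the abstract (integrating predicted time-varying voxel velocities into a deformation field) and the folding metric used to judge deformation plausibility can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: `velocity_net` is an assumed PyTorch module, the explicit Euler integrator and the finite-difference Jacobian check are simplifications, and ORRN's actual architecture, integration scheme, and evaluation code may differ.

```python
import torch
import torch.nn.functional as F

def warp(image, disp):
    """Warp a volume (B, C, D, H, W) by a displacement field (B, 3, D, H, W)
    given in voxel units with channel order (dx, dy, dz)."""
    B, _, D, H, W = image.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates expected by grid_sample.
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, D), torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
        indexing="ij")
    identity = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0).to(image)  # (1, D, H, W, 3)
    # Convert voxel displacements to the same normalized coordinate convention.
    scale = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1), 2.0 / max(D - 1, 1)]).to(image)
    disp_norm = disp.permute(0, 2, 3, 4, 1) * scale  # (B, D, H, W, 3)
    return F.grid_sample(image, identity + disp_norm, align_corners=True)

def recursive_ode_registration(velocity_net, moving, fixed, n_steps=8):
    """Progressively build a deformation by explicit Euler integration of the
    voxel velocities predicted at each time step (recursive registration)."""
    disp = torch.zeros(moving.shape[0], 3, *moving.shape[2:], device=moving.device)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = torch.full((moving.shape[0],), k * dt, device=moving.device)
        warped = warp(moving, disp)         # warp with the deformation accumulated so far
        v = velocity_net(warped, fixed, t)  # time-varying voxel velocity field (assumed module)
        disp = disp + dt * v                # one ODE integration step
    return disp, warp(moving, disp)

def folding_fraction(disp):
    """Fraction of voxels with a non-positive Jacobian determinant of
    phi(x) = x + u(x), i.e. 'unrealistic image folding'."""
    du_dz, du_dy, du_dx = torch.gradient(disp, dim=(2, 3, 4))  # each (B, 3, D, H, W)
    # Rows index the displacement component (x, y, z); columns the derivative (d/dx, d/dy, d/dz).
    rows = [torch.stack([du_dx[:, i], du_dy[:, i], du_dz[:, i]], dim=-1) for i in range(3)]
    J_u = torch.stack(rows, dim=-2)                            # (B, D, H, W, 3, 3)
    det = torch.det(torch.eye(3, device=disp.device) + J_u)    # (B, D, H, W)
    return (det <= 0).float().mean().item()

# Hypothetical usage:
# disp, warped = recursive_ode_registration(my_velocity_net, exhale_ct, inhale_ct)
# print(folding_fraction(disp))
```

One appeal of this formulation is that intermediate deformations fall out of the same integration loop, which is what makes group-wise (3D+t) motion tracking natural in addition to pair-wise registration.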
Related papers
- SAME: Deformable Image Registration based on Self-supervised Anatomical
Embeddings [16.38383865408585]
This work builds on a recent algorithm, SAM, which computes dense anatomical/semantic correspondences between two images at the pixel level.
Our method is named SAME, which breaks down image registration into three steps: affine transformation, coarse deformation, and deep deformable registration.
arXiv Detail & Related papers (2021-09-23T18:03:11Z) - Deformable Image Registration using Neural ODEs [15.245085400790002]
We present a generic, fast, and accurate diffeomorphic image registration framework that leverages neural ordinary differential equations (NODEs).
Compared with traditional optimization-based methods, our framework reduces the running time from tens of minutes to tens of seconds.
Our experiments show that our method outperforms state-of-the-art approaches under various metrics.
arXiv Detail & Related papers (2021-08-07T12:54:17Z) - A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations, in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z) - Multi-scale Neural ODEs for 3D Medical Image Registration [7.715565365558909]
Image registration plays an important role in medical image analysis.
Deep learning (learn-to-map) methods are much faster, but an iterative or coarse-to-fine approach is still required to improve accuracy when handling large motions.
In this work, we propose to learn registration via a multi-scale neural ODE model.
arXiv Detail & Related papers (2021-06-16T00:26:53Z) - Learning a Model-Driven Variational Network for Deformable Image
Registration [89.9830129923847]
VR-Net is a novel cascaded variational network for unsupervised deformable image registration.
It outperforms state-of-the-art deep learning methods in registration accuracy, while maintaining the fast inference speed of deep learning and the data efficiency of variational models.
arXiv Detail & Related papers (2021-05-25T21:37:37Z) - A Meta-Learning Approach for Medical Image Registration [6.518615946009265]
We propose a novel unsupervised registration model integrated with a gradient-based meta-learning framework.
In our experiments, the proposed model obtained significantly improved performance in terms of accuracy and training time.
arXiv Detail & Related papers (2021-04-21T10:27:05Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - F3RNet: Full-Resolution Residual Registration Network for Deformable
Image Registration [21.99118499516863]
Deformable image registration (DIR) is essential for many image-guided therapies.
We propose a novel unsupervised registration network, the Full-Resolution Residual Registration Network (F3RNet), which combines two parallel streams.
One stream takes advantage of full-resolution information to facilitate accurate voxel-level registration.
The other learns deep multi-scale residual representations to obtain robust recognition.
arXiv Detail & Related papers (2020-09-15T15:05:54Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT
Image Data [63.73263986460191]
Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions.
We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance.
Using 4D information for the model input improves performance while maintaining reasonable inference times.
arXiv Detail & Related papers (2020-04-21T15:43:01Z) - Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion
Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.