TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration
- URL: http://arxiv.org/abs/2407.11219v2
- Date: Wed, 24 Jul 2024 02:45:37 GMT
- Title: TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration
- Authors: Nian Wu, Jiarui Xing, Miaomiao Zhang,
- Abstract summary: This paper presents a novel approach, termed Temporal Latent Residual Network (TLRN), to predict a sequence of deformation fields in time-series image registration.
Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields.
- Score: 4.272666443603612
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel approach, termed {\em Temporal Latent Residual Network (TLRN)}, to predict a sequence of deformation fields in time-series image registration. The challenge of registering time-series images often lies in the occurrence of large motions, especially when images differ significantly from a reference (e.g., the start of a cardiac cycle compared to the peak stretching phase). To achieve accurate and robust registration results, we leverage the nature of motion continuity and exploit the temporal smoothness in consecutive image frames. Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields. We treat a sequence of residual blocks over time as a dynamic training system, where each block is designed to learn the residual function between desired deformation features and the current input accumulated from previous time frames. We validate the effectiveness of TLRN on both synthetic data and real-world cine cardiac magnetic resonance (CMR) image videos. Our experimental results show that TLRN is able to achieve substantially improved registration accuracy compared to the state-of-the-art. Our code is publicly available at https://github.com/nellie689/TLRN.
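The core idea above, that each block over time learns a residual between the desired deformation features and the input accumulated from previous frames, can be sketched as a sequential chain of latent residual updates. This is a minimal NumPy illustration only; the layer shapes, the ReLU nonlinearity, the random weights, and the `residual_block` helper are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a temporal latent residual chain:
# z_t = z_{t-1} + F_t(z_{t-1}), where z is a latent code for a
# time-sequential initial velocity field. Weights and shapes are
# made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16   # assumed size of the latent deformation code
NUM_FRAMES = 4    # assumed number of time frames after the reference


def residual_block(z_prev, w1, w2):
    """One latent residual step: identity skip plus a learned residual."""
    hidden = np.maximum(0.0, z_prev @ w1)   # ReLU feature map
    return z_prev + hidden @ w2             # accumulate onto previous latent


# One block per time frame, each with its own (random, illustrative) weights.
weights = [
    (rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.1,
     rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.1)
    for _ in range(NUM_FRAMES)
]

z = rng.standard_normal(LATENT_DIM)  # latent code of the first velocity field
latents = [z]
for w1, w2 in weights:               # blocks applied sequentially over time
    z = residual_block(z, w1, w2)
    latents.append(z)

print(len(latents), latents[-1].shape)
```

Each latent code would then be decoded into an initial velocity field that parameterizes the deformation for that frame; the sequential accumulation is what lets later frames exploit the temporal smoothness of earlier ones.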
Related papers
- CT-OT Flow: Estimating Continuous-Time Dynamics from Discrete Temporal Snapshots [8.656560659184303]
We propose Continuous-Time Optimal Transport Flow (CT-OT Flow), which infers high-resolution time labels via partial optimal transport and reconstructs a continuous-time data distribution through temporal kernel smoothing.
CT-OT Flow consistently outperforms state-of-the-art methods on synthetic benchmarks and achieves lower reconstruction errors on real scRNA-seq and typhoon-track datasets.
arXiv Detail & Related papers (2025-05-23T00:12:49Z) - ABN: Anti-Blur Neural Networks for Multi-Stage Deformable Image Registration [20.054872823030454]
Deformable image registration serves as an essential preprocessing step for neuroimaging data.
We propose a novel solution, called Anti-Blur Network (ABN), for multi-stage image registration.
arXiv Detail & Related papers (2022-12-06T19:21:43Z) - Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject temporal image registration using large-scale cine cardiac magnetic resonance image sequences.
arXiv Detail & Related papers (2022-11-24T23:45:01Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Recurrence-in-Recurrence Networks for Video Deblurring [58.49075799159015]
State-of-the-art video deblurring methods often adopt recurrent neural networks to model the temporal dependency between the frames.
In this paper, we propose recurrence-in-recurrence network architecture to cope with the limitations of short-ranged memory.
arXiv Detail & Related papers (2022-03-12T11:58:13Z) - Closed-loop Feedback Registration for Consecutive Images of Moving Flexible Targets [4.61174541905193]
We propose a closed-loop feedback registration algorithm for matching and stitching the deformable printed patterns on a moving flexible substrate.
Our results show that our algorithm can find more matching point pairs with a lower root mean squared error (RMSE) compared to other state-of-the-art algorithms.
arXiv Detail & Related papers (2021-10-20T20:31:43Z) - A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations, in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z) - Deep Convolutional Neural Network for Non-rigid Image Registration [0.0]
In this report, I will explore the ability of a deep neural network (DNN) and, more specifically, a deep convolutional neural network (CNN) to efficiently perform non-rigid image registration.
The experimental results show that a CNN can be used for efficient non-rigid image registration and in significantly less computational time than a conventional Diffeomorphic Demons or Pyramiding approach.
arXiv Detail & Related papers (2021-04-24T23:24:29Z) - Test-Time Training for Deformable Multi-Scale Image Registration [15.523457398508263]
Deep learning-based registration approaches such as VoxelMorph have been emerging and achieve competitive performance.
We construct a test-time training scheme for deep deformable image registration to improve the generalization ability of conventional learning-based registration models.
arXiv Detail & Related papers (2021-03-25T03:22:59Z) - Multi-Temporal Convolutions for Human Action Recognition in Videos [83.43682368129072]
We present a novel multi-temporal convolution block capable of extracting features at multiple temporal resolutions.
The proposed blocks are lightweight and can be integrated into any 3D-CNN architecture.
arXiv Detail & Related papers (2020-11-08T10:40:26Z) - Deep Group-wise Variational Diffeomorphic Image Registration [3.0022455491411653]
We propose to extend current learning-based image registration to allow simultaneous registration of multiple images.
We present a general mathematical framework that enables both registration of multiple images to their viscous geodesic average and registration in which any of the available images can be used as a fixed image.
arXiv Detail & Related papers (2020-10-01T07:37:28Z) - A Prospective Study on Sequence-Driven Temporal Sampling and Ego-Motion Compensation for Action Recognition in the EPIC-Kitchens Dataset [68.8204255655161]
Action recognition is one of the most challenging research fields in computer vision.
Sequences recorded under ego-motion have become particularly relevant.
The proposed method copes with this by estimating the ego-motion, i.e., the camera motion.
arXiv Detail & Related papers (2020-08-26T14:44:45Z)