Recurrent Image Registration using Mutual Attention based Network
- URL: http://arxiv.org/abs/2206.01863v1
- Date: Sat, 4 Jun 2022 00:35:14 GMT
- Title: Recurrent Image Registration using Mutual Attention based Network
- Authors: Jian-Qing Zheng, Ziyang Wang, Baoru Huang, Ngee Han Lim, Tonia
Vincent, Bartlomiej W. Papiez
- Abstract summary: We propose a new registration network combining a recursive network architecture and a mutual attention mechanism to overcome the receptive-field limitations of multi-stage registration.
Our network achieves the highest accuracy on a lung Computed Tomography (CT) dataset and one of the most accurate results on an abdominal CT dataset with 9 organs of various sizes.
- Score: 7.962754216042411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image registration is an important task in medical imaging which estimates
the spatial transformation between different images. Many previous studies have
used learning-based multi-stage methods to improve the performance of 3D image
registration. The performance of the multi-stage approach, however, is limited
by the size of the receptive field, because complex motion does not occur at a
single spatial scale. We propose a new registration network combining a
recursive network architecture and a mutual attention mechanism to overcome
these limitations. Compared with previous deep learning methods, our network,
based on the recursive structure, achieves the highest accuracy on a lung
Computed Tomography (CT) dataset (Dice score of 92% and average surface
distance of 3.8 mm for the lungs) and one of the most accurate results on an
abdominal CT dataset with 9 organs of various sizes (Dice score of 55% and
average surface distance of 7.8 mm). We also show that adding 3 recursive
networks is sufficient to achieve state-of-the-art results without a
significant increase in inference time.
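The two ideas in the abstract, mutual attention between fixed and moving image features and recursive refinement of the displacement estimate, can be sketched together. This is a minimal NumPy illustration under simplifying assumptions (flattened features, additive field composition, no warping between stages); `predict_step` is a hypothetical stand-in for the learned displacement predictor, not the authors' network.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax with max subtraction for numerical stability."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mutual_attention(feat_a, feat_b):
    """Cross-attention in both directions between two feature sets.

    feat_a, feat_b: (N, C) arrays of N flattened voxel positions with C
    channels each. Each image's features act as queries against the other
    image's keys/values, so each output pools information from the *other*
    image rather than from the image itself.
    """
    scale = np.sqrt(feat_a.shape[1])
    a_to_b = softmax(feat_a @ feat_b.T / scale) @ feat_b  # a queries b
    b_to_a = softmax(feat_b @ feat_a.T / scale) @ feat_a  # b queries a
    return a_to_b, b_to_a

def recursive_register(feat_fixed, feat_moving, predict_step, stages=3):
    """Accumulate a displacement estimate over a fixed number of stages.

    predict_step(fixed, moving_attended) -> (N, 3) residual displacement.
    Composition is simplified to addition, and the attention is recomputed
    on the unwarped features each stage; a real network would warp the
    moving image by the running field before each refinement.
    """
    disp = np.zeros((feat_fixed.shape[0], 3))
    for _ in range(stages):
        attended, _ = mutual_attention(feat_fixed, feat_moving)
        disp += predict_step(feat_fixed, attended)
    return disp
```

The per-stage residual prediction is what lets a small receptive field accumulate into a large effective one over several recursions.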
Related papers
- Recurrence With Correlation Network for Medical Image Registration [66.63200823918429]
We present Recurrence with Correlation Network (RWCNet), a medical image registration network with multi-scale features and a cost volume layer.
We demonstrate that these architectural features improve medical image registration accuracy in two image registration datasets.
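A cost volume of the kind RWCNet's summary mentions stores a similarity score between each fixed-image position and candidate displaced positions in the moving image. The 1D version below is a hedged sketch of the idea only, not RWCNet's actual layer (which operates over 3D local windows):

```python
import numpy as np

def cost_volume1d(feat_f, feat_m, max_disp=2):
    """Correlation cost volume along one spatial axis.

    feat_f, feat_m: (L, C) feature sequences for the fixed and moving
    images. Returns an (L, 2*max_disp+1) array of channel-normalized dot
    products, one column per candidate integer shift in [-max_disp, max_disp].
    """
    L, C = feat_f.shape
    vol = np.zeros((L, 2 * max_disp + 1))
    for j, d in enumerate(range(-max_disp, max_disp + 1)):
        for i in range(L):
            k = i + d
            if 0 <= k < L:  # out-of-range shifts score zero
                vol[i, j] = feat_f[i] @ feat_m[k] / C
    return vol
```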
arXiv Detail & Related papers (2023-02-05T02:41:46Z)
- Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning [55.762840052788945]
We present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales.
We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery.
arXiv Detail & Related papers (2022-12-30T03:15:34Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
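The alternation the summary describes, a physics-based data-consistency step followed by a learned regularization step, can be sketched for a generic linear forward model. This is an illustrative NumPy unrolling, not GLEAM itself; `denoise` is a hypothetical stand-in for the learned regularizer, and a real MRI system matrix would involve coil sensitivities and undersampled Fourier sampling.

```python
import numpy as np

def unrolled_recon(y, A, denoise, iters=5, step=1.0):
    """Unrolled reconstruction: alternate a gradient step on the
    data-consistency term ||Ax - y||^2 with a regularization step.

    y: measurement vector; A: (M, N) forward operator; denoise: callable
    standing in for the learned neural-network regularizer.
    """
    x = A.T @ y  # adjoint of the measurements as initialization
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)  # physics-based data consistency
        x = denoise(x)                    # learned regularization (stand-in)
    return x
```

Unrolling a fixed number of such iterations and training the regularizer end-to-end is what makes these networks memory-hungry at high dimension, which is the training bottleneck GLEAM targets.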
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Residual Aligner Network [8.542808644281433]
The Motion-Aware (MA) structure captures different motions within a region.
The new network achieves results indistinguishable from those of the best-ranked networks.
arXiv Detail & Related papers (2022-03-07T22:48:43Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
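The core of an STN is a resampling of the input under a predicted transformation. A minimal 2D nearest-neighbour version, omitting the localisation network that predicts the affine parameters and the differentiable bilinear sampling a real STN uses, might look like:

```python
import numpy as np

def affine_warp2d(img, theta):
    """Warp a 2D image by a 2x3 affine matrix with nearest-neighbour
    sampling. For each output pixel, theta maps its (x, y, 1) coordinate
    to a source location; out-of-bounds sources are filled with zeros."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    src = theta @ coords  # (2, H*W) source coordinates
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < W) & (sy >= 0) & (sy < H)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out
```

With the identity matrix `[[1, 0, 0], [0, 1, 0]]` the image is returned unchanged; the translation and scale columns of `theta` are what the STN's localisation network would predict to homogenize pose and size.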
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- F3RNet: Full-Resolution Residual Registration Network for Deformable Image Registration [21.99118499516863]
Deformable image registration (DIR) is essential for many image-guided therapies.
We propose a novel unsupervised registration network, namely the Full-Resolution Residual Registration Network (F3RNet).
One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration.
The other stream learns the deep multi-scale residual representations to obtain robust recognition.
arXiv Detail & Related papers (2020-09-15T15:05:54Z)
- JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D Multi-Modal Image Alignment of Large-scale Pathological CT Scans [27.180136688977512]
We propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network.
The system is optimized to satisfy the implicit constraints between different tasks in an unsupervised manner.
It consistently outperforms conventional state-of-the-art multi-modal registration methods.
arXiv Detail & Related papers (2020-05-25T16:30:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.