Meta-Learning Initializations for Interactive Medical Image Registration
- URL: http://arxiv.org/abs/2210.15371v1
- Date: Thu, 27 Oct 2022 12:30:53 GMT
- Title: Meta-Learning Initializations for Interactive Medical Image Registration
- Authors: Zachary M.C. Baum, Yipeng Hu, Dean Barratt
- Abstract summary: This paper describes a specific algorithm that implements the registration, interaction and meta-learning protocol for our exemplar clinical application.
Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration.
- Score: 0.18750851274087482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a meta-learning framework for interactive medical image
registration. Our proposed framework comprises three components: a
learning-based medical image registration algorithm, a form of user interaction
that refines registration at inference, and a meta-learning protocol that
learns a rapidly adaptable network initialization. This paper describes a
specific algorithm that implements the registration, interaction and
meta-learning protocol for our exemplar clinical application: registration of
magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled
transrectal ultrasound (TRUS) images. Our approach obtains comparable
registration error (4.26 mm) to the best-performing non-interactive
learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the
data, with adaptation occurring in real time during acquisition. Applying sparsely sampled
data to non-interactive methods yields higher registration errors (6.26 mm),
demonstrating the effectiveness of interactive MR-TRUS registration, which may
be applied intraoperatively given the real-time nature of the adaptation
process.
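The rapidly adaptable initialization described above is in the spirit of MAML-style meta-learning. The sketch below is illustrative only (not the authors' released code): the network, loss, and data are placeholders. It shows a first-order MAML-style outer loop that learns an initialization for a toy registration network, plus an inner loop of a few gradient steps that stands in for the interactive refinement on sparsely sampled data at inference.

```python
import copy
import torch
import torch.nn as nn


class RegistrationNet(nn.Module):
    """Toy stand-in for a learning-based registration network: maps
    concatenated (moving, fixed) feature vectors to transformation parameters."""

    def __init__(self, feat_dim=64, n_params=6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, moving_feat, fixed_feat):
        return self.mlp(torch.cat([moving_feat, fixed_feat], dim=-1))


def registration_loss(pred, target):
    # Placeholder for an image-similarity or landmark-distance loss.
    return ((pred - target) ** 2).mean()


def inner_adapt(net, support, inner_lr=1e-2, steps=3):
    """A few gradient steps on sparse 'support' data -- the stand-in for
    interactively acquired samples refining the registration at inference."""
    adapted = copy.deepcopy(net)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    moving, fixed, target = support
    for _ in range(steps):
        loss = registration_loss(adapted(moving, fixed), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted


def meta_train(net, tasks, meta_lr=1e-3, epochs=10):
    """First-order MAML-style outer loop over per-case (support, query) tasks."""
    meta_opt = torch.optim.Adam(net.parameters(), lr=meta_lr)
    for _ in range(epochs):
        for support, query in tasks:
            adapted = inner_adapt(net, support)
            moving, fixed, target = query
            # Evaluate the adapted weights on held-out query data and apply
            # the resulting (first-order) gradients to the meta-initialization.
            loss = registration_loss(adapted(moving, fixed), target)
            grads = torch.autograd.grad(loss, list(adapted.parameters()))
            meta_opt.zero_grad()
            for p, g in zip(net.parameters(), grads):
                p.grad = g.clone()
            meta_opt.step()
    return net


if __name__ == "__main__":
    torch.manual_seed(0)
    feat_dim, n_params = 64, 6

    def fake_batch(n):
        # Random features and targets in place of real MR/TRUS-derived data.
        return (torch.randn(n, feat_dim), torch.randn(n, feat_dim),
                torch.randn(n, n_params))

    tasks = [(fake_batch(4), fake_batch(8)) for _ in range(5)]
    meta_net = meta_train(RegistrationNet(feat_dim, n_params), tasks)

    # Inference: a handful of sparse, interactively acquired samples adapt the
    # meta-initialization to a new case in a few fast gradient steps.
    case_net = inner_adapt(meta_net, fake_batch(4), steps=3)
    print(case_net(torch.randn(1, feat_dim), torch.randn(1, feat_dim)).shape)
```

Because only the few-step inner loop runs at inference, this pattern is what can make per-case adaptation fast enough to be interactive.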
Related papers
- SAMReg: SAM-enabled Image Registration with ROI-based Correspondence [12.163299991979574]
This paper describes a new spatial correspondence representation based on paired regions-of-interest (ROIs) for medical image registration.
We develop a new registration algorithm SAMReg, which does not require any training (or training data), gradient-based fine-tuning or prompt engineering.
The proposed methods outperform both intensity-based iterative algorithms and DDF-predicting learning-based networks across tested metrics.
arXiv Detail & Related papers (2024-10-17T23:23:48Z)
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z)
- Recurrent Inference Machine for Medical Image Registration [11.351457718409788]
We propose a novel image registration method, termed Recurrent Inference Image Registration (RIIR) network.
RIIR is formulated as a meta-learning solver that addresses the registration problem iteratively.
Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only 5% of the training data.
arXiv Detail & Related papers (2024-06-19T10:06:35Z)
- Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images [74.99415008543276]
Most deep learning-based registration methods assume that the deformation fields are smooth and continuous everywhere in the image domain.
We propose a novel discontinuity-preserving image registration method to tackle this challenge, which ensures globally discontinuous and locally smooth deformation fields.
A co-attention block is proposed in the segmentation component of the network to learn the structural correlations in the input images.
We evaluate our method on the task of intra-subject-temporal image registration using large-scale cinematic cardiac magnetic resonance image sequences.
arXiv Detail & Related papers (2022-11-24T23:45:01Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
- Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions [69.6849409155959]
This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives.
We conduct image registration experiments on multi-site volume datasets and various registration tasks.
Our results show that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-14T01:54:38Z)
- Unsupervised Image Registration Towards Enhancing Performance and Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z)
- SAME: Deformable Image Registration based on Self-supervised Anatomical Embeddings [16.38383865408585]
This work builds on a recent algorithm, SAM, which can compute dense anatomical/semantic correspondences between two images at the pixel level.
Our method is named SAME, which breaks down image registration into three steps: affine transformation, coarse deformation, and deep deformable registration.
arXiv Detail & Related papers (2021-09-23T18:03:11Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D Multi-Modal Image Alignment of Large-scale Pathological CT Scans [27.180136688977512]
We propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network.
The system is optimized to satisfy the implicit constraints between different tasks in an unsupervised manner.
It consistently outperforms conventional state-of-the-art multi-modal registration methods.
arXiv Detail & Related papers (2020-05-25T16:30:02Z)