Learning to Match 2D Keypoints Across Preoperative MR and Intraoperative Ultrasound
- URL: http://arxiv.org/abs/2409.08169v1
- Date: Thu, 12 Sep 2024 16:00:22 GMT
- Title: Learning to Match 2D Keypoints Across Preoperative MR and Intraoperative Ultrasound
- Authors: Hassan Rasheed, Reuben Dorent, Maximilian Fehrentz, Tina Kapur, William M. Wells III, Alexandra Golby, Sarah Frisken, Julia A. Schnabel, Nazim Haouchine
- Abstract summary: We propose a texture-invariant 2D keypoints descriptor specifically designed for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images.
We build our training set by enforcing keypoint localization over all images, then train a patient-specific descriptor network that learns texture-invariant discriminant features in a supervised contrastive manner.
Our experiments on real cases with ground truth show the effectiveness of the proposed approach, outperforming the state-of-the-art methods and achieving 80.35% matching precision on average.
- Score: 38.1299082729891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose in this paper a texture-invariant 2D keypoints descriptor specifically designed for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images. We introduce a matching-by-synthesis strategy, where intraoperative US images are synthesized from MR images accounting for multiple MR modalities and intraoperative US variability. We build our training set by enforcing keypoint localization over all images, then train a patient-specific descriptor network that learns texture-invariant discriminant features in a supervised contrastive manner, leading to robust keypoints descriptors. Our experiments on real cases with ground truth show the effectiveness of the proposed approach, outperforming the state-of-the-art methods and achieving 80.35% matching precision on average.
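To make the training recipe above concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of the supervised contrastive step: a small CNN maps corresponding keypoint patches from an MR image and a US image synthesized from it to L2-normalized descriptors, and an InfoNCE-style loss pulls matching patches together. The network architecture, the 32x32 patch size, the descriptor dimension, and the temperature are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): supervised
# contrastive training of a patch descriptor on corresponding MR / synthetic-US
# keypoint patches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Small CNN mapping a 32x32 grayscale patch to an L2-normalized descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 16 -> 8
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def contrastive_matching_loss(desc_mr, desc_us, temperature=0.07):
    """InfoNCE-style loss: patch i in the MR image should match patch i in the synthetic US."""
    logits = desc_mr @ desc_us.t() / temperature                   # (N, N) similarity matrix
    targets = torch.arange(desc_mr.size(0), device=logits.device)  # positives lie on the diagonal
    # Symmetric cross-entropy over both matching directions (MR -> US and US -> MR).
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: random tensors stand in for patches extracted around co-localized keypoints.
model = PatchDescriptor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

mr_patches = torch.randn(64, 1, 32, 32)  # patches around keypoints in the MR image
us_patches = torch.randn(64, 1, 32, 32)  # patches at the same keypoints in the synthesized US image

loss = contrastive_matching_loss(model(mr_patches), model(us_patches))
loss.backward()
optimizer.step()
```

In the paper's setting, the positive pairs would come from keypoints enforced to be co-localized across the preoperative MR image and its multiple US syntheses, which is what drives the learned descriptors toward texture invariance.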
Related papers
- A 3D Cross-modal Keypoint Descriptor for MR-US Matching and Registration [0.053801353100098995]
Intraoperative registration of real-time ultrasound to preoperative Magnetic Resonance Imaging (MRI) remains an unsolved problem.
We propose a novel 3D cross-modal keypoint descriptor for MRI-iUS matching and registration.
Our approach employs a patient-specific matching-by-synthesis strategy, generating synthetic iUS volumes from preoperative MRI.
arXiv Detail & Related papers (2025-07-24T16:19:08Z) - From Real Artifacts to Virtual Reference: A Robust Framework for Translating Endoscopic Images [27.230439605570812]
In endoscopic imaging, combining pre-operative data with intra-operative imaging is important for surgical planning and navigation.
Existing domain adaptation methods are hampered by distribution shift caused by in vivo artifacts.
This paper presents an artifact-resilient image translation method and an associated benchmark for this purpose.
arXiv Detail & Related papers (2024-10-15T02:41:52Z) - Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z) - Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patients' data from clinical cases, showing that our method outperforms state-of-the-art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z) - Unsupervised Multimodal 3D Medical Image Registration with Multilevel Correlation Balanced Optimization [22.633633605566214]
We propose an unsupervised multimodal medical image registration method based on multilevel correlation balanced optimization.
For preoperative medical images in different modalities, the alignment and stacking of valid information are achieved by maximum fusion between deformation fields.
arXiv Detail & Related papers (2024-09-08T09:38:59Z) - Intra-video Positive Pairs in Self-Supervised Learning for Ultrasound [65.23740556896654]
Self-supervised learning (SSL) is one strategy for addressing the paucity of labelled data in medical imaging.
In this study, we investigated the effect of utilizing proximal, distinct images from the same B-mode ultrasound video as pairs for SSL.
Named Intra-Video Positive Pairs (IVPP), the method surpassed previous ultrasound-specific contrastive learning methods' average test accuracy on COVID-19 classification.
arXiv Detail & Related papers (2024-03-12T14:57:57Z) - K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z) - The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn) [9.082208613256295]
We present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023.
The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided.
arXiv Detail & Related papers (2023-05-15T20:49:58Z) - A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
arXiv Detail & Related papers (2022-03-28T14:03:45Z) - Unsupervised Image Registration Towards Enhancing Performance and Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z) - MR-Contrast-Aware Image-to-Image Translations with Generative
Adversarial Networks [5.3580471186206005]
We train an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time.
Our approach yields a peak signal-to-noise ratio and structural similarity of 24.48 and 0.66, surpassing the pix2pix benchmark model significantly.
arXiv Detail & Related papers (2021-04-03T17:05:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.