Global Multi-modal 2D/3D Registration via Local Descriptors Learning
- URL: http://arxiv.org/abs/2205.03439v1
- Date: Fri, 6 May 2022 18:24:19 GMT
- Title: Global Multi-modal 2D/3D Registration via Local Descriptors Learning
- Authors: Viktoria Markova, Matteo Ronchetti, Wolfgang Wein, Oliver Zettinig and Raphael Prevost
- Abstract summary: We present a novel approach to solve the problem of registration of an ultrasound sweep to a pre-operative image.
We learn dense keypoint descriptors from which we then estimate the registration.
Our approach is evaluated on a clinical dataset of paired MR volumes and ultrasound sequences.
- Score: 0.3299877799532224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-modal registration is a required step for many image-guided procedures,
especially ultrasound-guided interventions that require anatomical context.
While a number of such registration algorithms are already available, they all
require a good initialization to succeed due to the challenging appearance of
ultrasound images and the arbitrary coordinate system they are acquired in. In
this paper, we present a novel approach to solve the problem of registration of
an ultrasound sweep to a pre-operative image. We learn dense keypoint
descriptors from which we then estimate the registration. We show that our
method overcomes the challenges inherent to registration tasks with freehand
ultrasound sweeps, namely, the multi-modality and multidimensionality of the
data in addition to lack of precise ground truth and low amounts of training
examples. We derive a registration method that is fast, generic, fully
automatic, does not require any initialization and can naturally generate
visualizations aiding interpretability and explainability. Our approach is
evaluated on a clinical dataset of paired MR volumes and ultrasound sequences.
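The core idea of the abstract, estimating a registration from matched dense keypoint descriptors, can be illustrated with a generic sketch. This is not the authors' implementation; it shows one common recipe under assumed conventions: mutual nearest-neighbour matching of L2-normalized descriptors, followed by a closed-form rigid alignment (Kabsch algorithm). All function names are illustrative.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Mutual nearest-neighbour matching of L2-normalized descriptors.

    desc_a: (Na, D), desc_b: (Nb, D). Returns index arrays (ia, ib)
    such that desc_a[ia] matches desc_b[ib].
    """
    sim = desc_a @ desc_b.T                    # cosine similarity matrix
    ab = sim.argmax(axis=1)                    # best match a -> b
    ba = sim.argmax(axis=0)                    # best match b -> a
    keep = ba[ab] == np.arange(len(desc_a))    # keep only mutual matches
    return np.flatnonzero(keep), ab[keep]

def kabsch_rigid_transform(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||.

    src, dst: (N, 3) arrays of matched keypoint coordinates.
    """
    src_c = src - src.mean(axis=0)             # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a full pipeline, the matches would additionally be filtered with a robust estimator such as RANSAC before the final transform is computed, since descriptor matches across modalities inevitably contain outliers.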
Related papers
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- Deep learning facilitates fully automated brain image registration of optoacoustic tomography and magnetic resonance imaging [6.9975936496083495]
Multi-spectral optoacoustic tomography (MSOT) is an emerging optical imaging method providing multiplex molecular and functional information from the rodent brain.
It can be greatly augmented by magnetic resonance imaging (MRI) that offers excellent soft-tissue contrast and high-resolution brain anatomy.
However, registration of multi-modal images remains challenging, chiefly due to the entirely different image contrast rendered by these modalities.
Here we propose a fully automated registration method for MSOT-MRI multimodal imaging empowered by deep learning.
arXiv Detail & Related papers (2021-09-04T14:50:44Z)
- A Deep Discontinuity-Preserving Image Registration Network [73.03885837923599]
Most deep learning-based registration methods assume that the desired deformation fields are globally smooth and continuous.
We propose a weakly-supervised Deep Discontinuity-preserving Image Registration network (DDIR) to obtain better registration performance and realistic deformation fields.
We demonstrate that our method achieves significant improvements in registration accuracy and predicts more realistic deformations, in registration experiments on cardiac magnetic resonance (MR) images.
arXiv Detail & Related papers (2021-07-09T13:35:59Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Representing Ambiguity in Registration Problems with Conditional Invertible Neural Networks [28.81229531636232]
In this paper, we explore the application of invertible neural networks (INNs) as core component of a registration methodology.
In a first feasibility study, we test the approach in a 2D/3D registration setting by registering spinal CT volumes to X-ray images.
arXiv Detail & Related papers (2020-12-15T10:28:41Z)
- F3RNet: Full-Resolution Residual Registration Network for Deformable Image Registration [21.99118499516863]
Deformable image registration (DIR) is essential for many image-guided therapies.
We propose a novel unsupervised registration network, namely the Full-Resolution Residual Registration Network (F3RNet).
One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration.
The other stream learns the deep multi-scale residual representations to obtain robust recognition.
arXiv Detail & Related papers (2020-09-15T15:05:54Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D Multi-Modal Image Alignment of Large-scale Pathological CT Scans [27.180136688977512]
We propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network.
The system is optimized to satisfy the implicit constraints between different tasks in an unsupervised manner.
It consistently outperforms conventional state-of-the-art multi-modal registration methods.
arXiv Detail & Related papers (2020-05-25T16:30:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.