Cross-Modality Image Registration using a Training-Time Privileged Third
Modality
- URL: http://arxiv.org/abs/2207.12901v1
- Date: Tue, 26 Jul 2022 13:50:30 GMT
- Title: Cross-Modality Image Registration using a Training-Time Privileged Third
Modality
- Authors: Qianye Yang, David Atkinson, Yunguan Fu, Tom Syer, Wen Yan, Shonit
Punwani, Matthew J. Clarkson, Dean C. Barratt, Tom Vercauteren, Yipeng Hu
- Abstract summary: We propose a learning from privileged modality algorithm to support the challenging multi-modality registration problems.
We present experimental results based on 369 sets of 3D multiparametric MRI images from 356 prostate cancer patients.
- Score: 5.78335050301421
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we consider the task of pairwise cross-modality image
registration, which may benefit from exploiting images that are available only
at training time from a third modality, different from the two being
registered. As an example, we focus on aligning intra-subject
multiparametric Magnetic Resonance (mpMR) images, between T2-weighted (T2w)
scans and diffusion-weighted scans with high b-value (DWI$_{high-b}$). For the
application of localising tumours in mpMR images, diffusion scans with zero
b-value (DWI$_{b=0}$) are considered easier to register to T2w due to the
availability of corresponding features. We propose a
learning-from-privileged-modality algorithm, using the training-only imaging
modality DWI$_{b=0}$, to support this challenging multi-modality registration
problem. We present
experimental results based on 369 sets of 3D multiparametric MRI images from
356 prostate cancer patients and report, with statistical significance, a
lowered median target registration error of 4.34 mm, when registering the
holdout DWI$_{high-b}$ and T2w image pairs, compared with that of 7.96 mm
before registration. Results also show that the proposed learning-based
registration networks enabled efficient registration with accuracy comparable
to or better than that of a classical iterative algorithm and other tested
learning-based methods, with or without the additional modality. These compared
algorithms also failed to produce any significantly improved alignment between
DWI$_{high-b}$ and T2w in this challenging application.
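The privileged-modality idea above can be sketched as a training loss with two image-similarity terms: one on the hard pair (T2w vs. warped DWI$_{high-b}$) and one on the easier, training-only pair (T2w vs. warped DWI$_{b=0}$). This is a simplified illustration, not the paper's exact formulation; the normalised cross-correlation similarity and the weighting `alpha` are assumptions here.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalised cross-correlation between two images (global, flattened)."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def privileged_training_loss(t2w, dwi_high_b_warped, dwi_b0_warped, alpha=0.5):
    """Training-time loss combining similarity of both warped moving images
    to the fixed T2w. The DWI_b0 term is the privileged signal: it is easier
    to match to T2w and is only available during training. At inference,
    the network sees only the T2w / DWI_high_b pair."""
    loss_high_b = 1.0 - ncc(t2w, dwi_high_b_warped)  # hard, test-time pair
    loss_b0 = 1.0 - ncc(t2w, dwi_b0_warped)          # privileged pair
    return (1.0 - alpha) * loss_high_b + alpha * loss_b0
```

In a full pipeline, both warped images would come from the same predicted deformation applied to the co-registered moving scans, so the cleaner DWI$_{b=0}$ term guides the network toward deformations that also align the harder pair.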
Related papers
- Recurrence With Correlation Network for Medical Image Registration [66.63200823918429]
We present Recurrence with Correlation Network (RWCNet), a medical image registration network with multi-scale features and a cost volume layer.
We demonstrate that these architectural features improve medical image registration accuracy in two image registration datasets.
arXiv Detail & Related papers (2023-02-05T02:41:46Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Modality-Aware Triplet Hard Mining for Zero-shot Sketch-Based Image Retrieval [51.42470171051007]
This paper tackles the Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) problem from the viewpoint of cross-modality metric learning.
By combining two fundamental learning approaches in DML, i.e., classification training and pairwise training, we set up a strong baseline for ZS-SBIR.
We show that Modality-Aware Triplet Hard Mining (MATHM) enhances the baseline with three types of pairwise learning.
arXiv Detail & Related papers (2021-12-15T08:36:44Z)
- SAME: Deformable Image Registration based on Self-supervised Anatomical Embeddings [16.38383865408585]
This work builds on SAM, a recent algorithm capable of computing dense anatomical/semantic correspondences between two images at the pixel level.
Our method is named SAME, which breaks down image registration into three steps: affine transformation, coarse deformation, and deep deformable registration.
arXiv Detail & Related papers (2021-09-23T18:03:11Z)
- Automatic Landmarks Correspondence Detection in Medical Images with an Application to Deformable Image Registration [0.0]
DCNN-Match learns to predict landmark correspondences in 3D images in a self-supervised manner.
Results show significant improvement in DIR performance when landmark correspondences predicted by DCNN-Match were used in case of simulated as well as clinical deformations.
arXiv Detail & Related papers (2021-09-06T20:16:27Z)
- Self-Supervised Multi-Modal Alignment for Whole Body Medical Imaging [70.52819168140113]
We use a dataset of over 20,000 subjects from the UK Biobank with both whole body Dixon technique magnetic resonance (MR) scans and also dual-energy x-ray absorptiometry (DXA) scans.
We introduce a multi-modal image-matching contrastive framework that learns to match different-modality scans of the same subject with high accuracy.
Without any adaption, we show that the correspondences learnt during this contrastive training step can be used to perform automatic cross-modal scan registration.
arXiv Detail & Related papers (2021-07-14T12:35:05Z)
- End-to-end Ultrasound Frame to Volume Registration [9.738024231762465]
We propose an end-to-end frame-to-volume registration network (FVR-Net) for registering 2D ultrasound frames to 3D volumes.
Our model shows superior efficiency for real-time interventional guidance with highly competitive registration accuracy.
arXiv Detail & Related papers (2021-07-14T01:59:42Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- F3RNet: Full-Resolution Residual Registration Network for Deformable Image Registration [21.99118499516863]
Deformable image registration (DIR) is essential for many image-guided therapies.
We propose a novel unsupervised registration network, namely the Full-Resolution Residual Registration Network (F3RNet).
One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration.
The other stream learns the deep multi-scale residual representations to obtain robust recognition.
arXiv Detail & Related papers (2020-09-15T15:05:54Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach, which requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- Groupwise Multimodal Image Registration using Joint Total Variation [0.0]
We introduce a cost function based on joint total variation for such multimodal image registration.
We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans.
arXiv Detail & Related papers (2020-05-06T16:11:32Z)
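A joint-total-variation cost of the kind this entry describes can be sketched as follows: at each voxel, take the L2 norm of all images' gradients jointly, so the cost is lower when edges from the different modalities coincide. This is a simplified 2D illustration; the finite-difference gradients via `np.gradient` and the `eps` smoothing term are assumptions here, not the paper's exact definition.

```python
import numpy as np

def joint_total_variation(images, eps=1e-8):
    """Joint TV of a list of spatially aligned 2D images.

    Sums, over all pixels, the joint L2 norm of every image's gradient.
    Because sqrt(g1^2 + g2^2) <= |g1| + |g2|, overlapping edges cost less
    than the same edges at different locations, so minimising joint TV
    over a transformation encourages edge alignment across modalities.
    """
    sq = np.zeros_like(images[0], dtype=float)
    for img in images:
        gy, gx = np.gradient(img.astype(float))  # finite-difference gradients
        sq += gx ** 2 + gy ** 2
    return float(np.sqrt(sq + eps).sum())
```

For example, two copies of the same step edge have a lower joint TV than the same edge paired with a shifted copy, which is the signal a registration optimiser would exploit.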
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.