Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion
- URL: http://arxiv.org/abs/2210.07611v1
- Date: Fri, 14 Oct 2022 08:06:57 GMT
- Title: Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion
- Authors: Srikrishna Jaganathan, Maximilian Kukla, Jian Wang, Karthik Shetty,
Andreas Maier
- Abstract summary: We propose a self-supervised 2D/3D registration framework combining simulated training with unsupervised feature and pixel space domain adaptation.
Our framework achieves a registration accuracy of 1.83$\pm$1.16 mm with a high success ratio of 90.1% on real X-ray images.
- Score: 10.040271638205382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning-based 2D/3D registration enables fast, robust, and accurate
X-ray to CT image fusion when large annotated paired datasets are available for
training. However, the need for paired CT volume and X-ray images with ground
truth registration limits the applicability in interventional scenarios. An
alternative is to use simulated X-ray projections from CT volumes, thus
removing the need for paired annotated datasets. Deep Neural Networks trained
exclusively on simulated X-ray projections can perform significantly worse on
real X-ray images due to the domain gap. We propose a self-supervised 2D/3D
registration framework combining simulated training with unsupervised feature
and pixel space domain adaptation to overcome the domain gap and eliminate the
need for paired annotated datasets. Our framework achieves a registration
accuracy of 1.83$\pm$1.16 mm with a high success ratio of 90.1% on real X-ray
images showing a 23.9% increase in success ratio compared to reference
annotation-free algorithms.
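The reported numbers (mean $\pm$ standard deviation of the target registration error, plus a success ratio) can be reproduced from per-case errors with a short sketch. The function name and the 10 mm success threshold below are illustrative assumptions, since this summary does not state the threshold the paper actually uses.

```python
import statistics

def registration_stats(tre_mm, success_threshold_mm):
    """Summarize per-case target registration errors (TREs).

    tre_mm: per-case TREs in millimetres.
    success_threshold_mm: TRE below which a registration counts as a
    success (hypothetical here; the paper's threshold is not given in
    this summary).
    Returns (mean, sample std, success ratio in percent).
    """
    mean = statistics.mean(tre_mm)
    std = statistics.stdev(tre_mm)
    success = sum(e < success_threshold_mm for e in tre_mm)
    return mean, std, 100.0 * success / len(tre_mm)

# Toy example with made-up per-case errors:
mean, std, ratio = registration_stats([1.2, 0.8, 2.5, 1.9, 12.0], 10.0)
# mean = 3.68 mm, success ratio = 80.0%
```

A success ratio, as used here and in the comparison above, is simply the fraction of test cases whose final registration error falls under the chosen clinical threshold.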
Related papers
- RayEmb: Arbitrary Landmark Detection in X-Ray Images Using Ray Embedding Subspace [0.7937206070844555]
Intra-operative 2D-3D registration of X-ray images with pre-operatively acquired CT scans is a crucial procedure in orthopedic surgeries.
We propose a novel method to address this issue by detecting arbitrary landmark points in X-ray images.
arXiv Detail & Related papers (2024-10-10T17:36:21Z)
- Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement from CT Scans [10.72672892416061]
Excessive X-ray exposure poses potential risks to human health.
Data-driven algorithms from volume scans to X-ray images are restricted by the scarcity of paired X-ray and volume data.
We propose CT2X-GAN to synthesize the X-ray images in an end-to-end manner using the content and style disentanglement from three different image domains.
arXiv Detail & Related papers (2024-04-18T04:25:56Z)
- Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis [88.86777314004044]
We propose a 3D Gaussian splatting-based framework, namely X-Gaussian, for X-ray novel view visualization.
Experiments show that our X-Gaussian outperforms state-of-the-art methods by 6.5 dB while requiring less than 15% of the training time and achieving over 73x faster inference.
arXiv Detail & Related papers (2024-03-07T00:12:08Z)
- X-Ray to CT Rigid Registration Using Scene Coordinate Regression [1.1687067206676627]
This paper proposes a fully automatic registration method that is robust to extreme viewpoints.
It is based on a fully convolutional neural network (CNN) that regresses the overlapping coordinates for a given X-ray image.
The proposed method achieved an average mean target registration error (mTRE) of 3.79 mm in the 50th percentile of the simulated test dataset and projected mTRE of 9.65 mm in the 50th percentile of real fluoroscopic images for pelvis registration.
arXiv Detail & Related papers (2023-11-25T17:48:46Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Transfer Learning from an Artificial Radiograph-landmark Dataset for Registration of the Anatomic Skull Model to Dual Fluoroscopic X-ray Images [0.4205692673448206]
We propose a transfer learning strategy for 3D-to-2D registration using deep neural networks trained from an artificial dataset.
Digitally reconstructed radiographs (DRRs) and radiographic skull landmarks were automatically created from craniocervical CT data of a female subject.
They were used to train a residual network (ResNet) for landmark detection and a cycle generative adversarial network (GAN) to eliminate the style difference between DRRs and actual X-rays.
The methodology of strategically augmenting artificial training data can tackle the complicated skull registration scenario, and has potential to extend to a wide range of registration scenarios.
arXiv Detail & Related papers (2021-08-14T04:49:36Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
- Pose-dependent weights and Domain Randomization for fully automatic X-ray to CT Registration [51.280096834264256]
Fully automatic X-ray to CT registration requires an initial alignment within the capture range of existing intensity-based registrations.
This work provides a novel automatic initialization, which enables end-to-end registration.
The mean (± standard deviation) target registration error is 4.1 ± 4.3 mm for simulated X-rays with a success rate of 92% and 4.2 ± 3.9 mm for real X-rays with a success rate of 86.8%, where success is defined as a translation error of less than 30 mm.
arXiv Detail & Related papers (2020-11-14T12:50:32Z)
- 3D Probabilistic Segmentation and Volumetry from 2D projection images [10.32519161805588]
X-Ray imaging is quick, cheap and useful for front-line care assessment and intra-operative real-time imaging.
It suffers from projective information loss and lacks vital information on which many essential diagnostic biomarkers are based.
In this paper we explore probabilistic methods to reconstruct 3D volumetric images from 2D imaging modalities.
arXiv Detail & Related papers (2020-06-23T08:00:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.