2D/3D Deep Image Registration by Learning 3D Displacement Fields for
Abdominal Organs
- URL: http://arxiv.org/abs/2212.05445v1
- Date: Sun, 11 Dec 2022 08:36:23 GMT
- Title: 2D/3D Deep Image Registration by Learning 3D Displacement Fields for
Abdominal Organs
- Authors: Ryuto Miura, Megumi Nakao, Mitsuhiro Nakamura, and Tetsuya Matsuda
- Abstract summary: We propose a supervised deep learning framework that achieves 2D/3D deformable image registration between 3D volumes and single-viewpoint 2D projected images.
The proposed method learns the translation from the target 2D projection images and the initial 3D volume to 3D displacement fields.
- Score: 1.9949261242626626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deformable registration of two-dimensional/three-dimensional (2D/3D) images
of abdominal organs is a complicated task because the abdominal organs deform
significantly and their contours are not detected in two-dimensional X-ray
images. We propose a supervised deep learning framework that achieves 2D/3D
deformable image registration between 3D volumes and single-viewpoint 2D
projected images. The proposed method learns the translation from the target 2D
projection images and the initial 3D volume to 3D displacement fields. In
experiments, we registered 3D-computed tomography (CT) volumes to digitally
reconstructed radiographs generated from abdominal 4D-CT volumes. For
validation, we used 4D-CT volumes of 35 cases and confirmed that the 3D-CT
volumes reflecting the nonlinear and local respiratory organ displacement were
reconstructed. The proposed method demonstrates performance comparable to
conventional methods, with a Dice similarity coefficient of 91.6% for the
liver region and 85.9% for the stomach region, while estimating
significantly more accurate CT values.
Related papers
- Rigid Single-Slice-in-Volume registration via rotation-equivariant 2D/3D feature matching [3.041742847777409]
We propose a self-supervised 2D/3D registration approach to match a single 2D slice to the corresponding 3D volume.
Results demonstrate the robustness of the proposed slice-in-volume registration on the NSCLC-Radiomics CT and KIRBY21 MRI datasets.
arXiv Detail & Related papers (2024-10-24T12:24:27Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models [52.529394863331326]
We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
arXiv Detail & Related papers (2023-03-15T08:28:06Z)
- CNN-based real-time 2D-3D deformable registration from a single X-ray projection [2.1198879079315573]
This paper presents a method for real-time 2D-3D non-rigid registration using a single fluoroscopic image.
A dataset composed of displacement fields and 2D projections of the anatomy is generated from a preoperative scan (see the data-generation sketch after this list).
A neural network is trained to recover the unknown 3D displacement field from a single projection image.
arXiv Detail & Related papers (2022-12-15T09:57:19Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- IGCN: Image-to-graph Convolutional Network for 2D/3D Deformable Registration [1.2246649738388387]
We propose an image-to-graph convolutional network that achieves deformable registration of a 3D organ mesh for a single-viewpoint 2D projection image.
We show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from radiographs with clinically acceptable accuracy.
arXiv Detail & Related papers (2021-10-31T12:48:37Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that enables us to leverage 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Comparative Evaluation of 3D and 2D Deep Learning Techniques for Semantic Segmentation in CT Scans [0.0]
We propose a 3D stack-based deep learning technique for segmenting manifestations of consolidation and ground-glass opacities in 3D Computed Tomography (CT) scans.
We present a comparison based on the segmentation results, the contextual information retained, and the inference time between this 3D technique and a traditional 2D deep learning technique.
The 3D technique results in a 5X reduction in the inference time compared to the 2D technique.
arXiv Detail & Related papers (2021-01-19T13:23:43Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
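Several of the supervised approaches above, such as the single-projection CNN registration, train on pairs synthesized from a preoperative scan: sample a smooth random 3D displacement field, warp the volume with it, and render a 2D projection of the result. Below is a deliberately crude, hypothetical sketch of that data generation; it assumes a parallel-beam line integral rather than a true DRR renderer, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of building supervised training pairs from a CT scan:
# sample a smooth random 3D displacement field, warp the volume with it,
# and render a parallel-beam projection by integrating along one axis.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_smooth_field(shape, sigma=8.0, amplitude=3.0, rng=None):
    """Per-axis Gaussian-smoothed noise, scaled to `amplitude` voxels."""
    rng = np.random.default_rng() if rng is None else rng
    field = np.stack([gaussian_filter(rng.standard_normal(shape), sigma)
                      for _ in range(3)])
    return field / (np.abs(field).max() + 1e-8) * amplitude

def warp_volume(vol, field):
    """Backward-warp `vol` by the displacement `field` (voxel units)."""
    grid = np.indices(vol.shape).astype(np.float32)
    return map_coordinates(vol, grid + field, order=1)

def projection(vol, axis=1):
    """Crude parallel-beam stand-in for a DRR: line integral along `axis`."""
    return vol.sum(axis=axis)

ct = np.random.rand(64, 128, 128).astype(np.float32)  # stand-in CT volume
field = random_smooth_field(ct.shape)
pair = (projection(warp_volume(ct, field)), field)    # (2D input, 3D target)
```

Each pair couples a 2D projection of the deformed anatomy with the 3D field that produced it, which is exactly the input/target pairing the supervised networks above regress.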
This list is automatically generated from the titles and abstracts of the papers on this site.