IGCN: Image-to-graph Convolutional Network for 2D/3D Deformable
Registration
- URL: http://arxiv.org/abs/2111.00484v1
- Date: Sun, 31 Oct 2021 12:48:37 GMT
- Title: IGCN: Image-to-graph Convolutional Network for 2D/3D Deformable
Registration
- Authors: Megumi Nakao, Mitsuhiro Nakamura, Tetsuya Matsuda
- Abstract summary: We propose an image-to-graph convolutional network that achieves deformable registration of a 3D organ mesh for a single-viewpoint 2D projection image.
We show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from radiographs with clinically acceptable accuracy.
- Score: 1.2246649738388387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organ shape reconstruction based on a single-projection image during
treatment has wide clinical scope, e.g., in image-guided radiotherapy and
surgical guidance. We propose an image-to-graph convolutional network that
achieves deformable registration of a 3D organ mesh for a single-viewpoint 2D
projection image. This framework enables simultaneous training of two types of
transformation: from the 2D projection image to a displacement map, and from
the sampled per-vertex feature to a 3D displacement that satisfies the
geometrical constraint of the mesh structure. Assuming application to radiation
therapy, the 2D/3D deformable registration performance is verified for multiple
abdominal organs that have not been targeted to date, i.e., the liver, stomach,
duodenum, and kidney, and for pancreatic cancer. The experimental results show
that shape prediction considering relationships among multiple organs can be used to
predict respiratory motion and deformation from digitally reconstructed
radiographs with clinically acceptable accuracy.
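The abstract describes two coupled mappings: a CNN that turns the 2D projection image into a dense displacement/feature map, and a graph convolutional network that turns per-vertex features sampled from that map into 3D vertex displacements constrained by the mesh structure. The PyTorch sketch below illustrates that pipeline under stated assumptions only; the module sizes, the grid-sample-based feature sampling, and all names (ImageEncoder, GraphConv, IGCNSketch) are illustrative and are not the authors' released implementation.
```python
# Hypothetical sketch of the two transformations described in the abstract:
# (1) 2D projection image -> dense feature/displacement map (CNN),
# (2) per-vertex features sampled from that map -> 3D vertex displacements (GCN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """2D projection image -> dense feature/displacement map."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):            # x: (B, 1, H, W)
        return self.net(x)           # (B, C, H, W)

class GraphConv(nn.Module):
    """Simple graph convolution: aggregate neighbours with a row-normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
    def forward(self, x, adj):       # x: (B, V, F), adj: (V, V)
        return F.relu(self.lin(torch.matmul(adj, x)))

class IGCNSketch(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.encoder = ImageEncoder(feat_ch)
        self.gcn1 = GraphConv(feat_ch + 3, 64)
        self.gcn2 = GraphConv(64, 64)
        self.head = nn.Linear(64, 3)   # per-vertex 3D displacement

    def forward(self, image, verts, proj_uv, adj):
        # image: (B, 1, H, W); verts: (B, V, 3) initial mesh vertices
        # proj_uv: (B, V, 2) projected vertex positions in the image, in [-1, 1]
        fmap = self.encoder(image)
        # Sample one feature vector per vertex at its projected 2D location.
        grid = proj_uv.unsqueeze(2)                             # (B, V, 1, 2)
        vfeat = F.grid_sample(fmap, grid, align_corners=True)   # (B, C, V, 1)
        vfeat = vfeat.squeeze(-1).permute(0, 2, 1)              # (B, V, C)
        h = torch.cat([vfeat, verts], dim=-1)                   # append geometry
        h = self.gcn2(self.gcn1(h, adj), adj)
        disp = self.head(h)                                     # (B, V, 3)
        return verts + disp                                     # deformed mesh
```
Sampling image features at each vertex's projected 2D location is what couples the image-space and mesh-space transformations; training would supervise the predicted per-vertex displacements against known deformations, with the graph convolutions acting as the geometrical constraint of the mesh structure.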
Related papers
- On the Localization of Ultrasound Image Slices within Point Distribution
Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- CNN-based real-time 2D-3D deformable registration from a single X-ray
projection [2.1198879079315573]
This paper presents a method for real-time 2D-3D non-rigid registration using a single fluoroscopic image.
A dataset composed of displacement fields and 2D projections of the anatomy is generated from a preoperative scan.
A neural network is trained to recover the unknown 3D displacement field from a single projection image.
arXiv Detail & Related papers (2022-12-15T09:57:19Z)
- 2D/3D Deep Image Registration by Learning 3D Displacement Fields for
Abdominal Organs [1.9949261242626626]
We propose a supervised deep learning framework that achieves 2D/3D deformable image registration between 3D volumes and single-viewpoint 2D projected images.
The proposed method learns the translation from the target 2D projection images and the initial 3D volume to 3D displacement fields.
arXiv Detail & Related papers (2022-12-11T08:36:23Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view-by-view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching
Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Image-to-Graph Convolutional Network for Deformable Shape Reconstruction
from a Single Projection Image [0.0]
We propose an image-to-graph convolutional network (IGCN) for deformable shape reconstruction from a single-viewpoint projection image.
The IGCN learns the relationship between shape/deformation variability and the deep image features based on a deformation mapping scheme.
arXiv Detail & Related papers (2021-08-28T00:00:09Z)
- The entire network structure of Crossmodal Transformer [4.605531191013731]
The proposed approach first learns skeletal features from 2D X-ray and 3D CT images using deep networks.
As a result, the well-trained network can directly predict the spatial correspondence between arbitrary 2D X-ray and 3D CT.
arXiv Detail & Related papers (2021-04-29T11:47:31Z)
- Deep Learning compatible Differentiable X-ray Projections for Inverse
Rendering [8.926091372824942]
We propose a differentiable renderer by deriving the distance travelled by a ray inside mesh structures to generate a distance map.
We show its application by solving the inverse problem, namely reconstructing 3D models from real 2D fluoroscopy images of the pelvis (a sketch of this distance-map idea appears after this list).
arXiv Detail & Related papers (2021-02-04T22:06:05Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Tattoo tomography: Freehand 3D photoacoustic image reconstruction with
an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)
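For the differentiable X-ray projection entry above, the core quantity is a distance map: for every pixel, the total length of the ray segments that lie inside the mesh. The NumPy sketch below is a plain, non-differentiable reference computation of that quantity under assumed conditions (orthographic rays, a closed surface, an illustrative pixel grid); all function names are hypothetical, and the cited paper instead derives a differentiable formulation suited to gradient-based inverse rendering.
```python
# Reference (non-differentiable) computation of a ray-path distance map:
# for each pixel ray, sum the lengths between entry/exit crossings of a closed mesh.
import numpy as np

def ray_triangle_t(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore intersection: return the ray parameter t of the hit, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def distance_map(vertices, faces, height, width, z_near=-10.0):
    """Orthographic distance map: path length of each pixel ray inside the mesh."""
    dmap = np.zeros((height, width))
    direction = np.array([0.0, 0.0, 1.0])            # rays travel along +z
    for i in range(height):
        for j in range(width):
            # Pixel grid assumed to span [0, 1] x [0, 1] in the x-y plane.
            origin = np.array([j / width, i / height, z_near])
            hits = []
            for f in faces:
                t = ray_triangle_t(origin, direction, *vertices[f])
                if t is not None:
                    hits.append(t)
            hits.sort()
            # Pair consecutive entry/exit crossings and accumulate interior lengths.
            for k in range(0, len(hits) - 1, 2):
                dmap[i, j] += hits[k + 1] - hits[k]
    return dmap
```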
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.