Inferring the 3D Standing Spine Posture from 2D Radiographs
- URL: http://arxiv.org/abs/2007.06612v2
- Date: Wed, 13 Jan 2021 15:15:38 GMT
- Title: Inferring the 3D Standing Spine Posture from 2D Radiographs
- Authors: Amirhossein Bayat, Anjany Sekuboyina, Johannes C. Paetzold, Christian
Payer, Darko Stern, Martin Urschler, Jan S. Kirschke, Bjoern H. Menze
- Abstract summary: An upright spinal pose (i.e. standing) under natural weight bearing is crucial for such bio-mechanical analysis.
We propose a novel neural network architecture working vertebra-wise, termed TransVert, which takes 2D radiographs and infers the spine's 3D posture.
- Score: 5.114998342130747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The treatment of degenerative spinal disorders requires an understanding of
the individual spinal anatomy and curvature in 3D. An upright spinal pose (i.e.
standing) under natural weight bearing is crucial for such bio-mechanical
analysis. 3D volumetric imaging modalities (e.g. CT and MRI) are performed in
patients lying down. On the other hand, radiographs are captured in an upright
pose, but result in 2D projections. This work aims to integrate the two realms,
i.e. it combines the upright spinal curvature from radiographs with the 3D
vertebral shape from CT imaging for synthesizing an upright 3D model of the spine,
loaded naturally. Specifically, we propose a novel neural network architecture
working vertebra-wise, termed \emph{TransVert}, which takes orthogonal 2D
radiographs and infers the spine's 3D posture. We validate our architecture on
digitally reconstructed radiographs, achieving a 3D reconstruction Dice of
$95.52\%$, indicating an almost perfect 2D-to-3D domain translation. Deploying
our model on clinical radiographs, we successfully synthesise full-3D, upright,
patient-specific spine models for the first time.
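The reported 3D reconstruction Dice of 95.52% is the standard overlap measure between binary volumes. Below is a minimal sketch of how such a score could be computed between a predicted and a ground-truth vertebra mask; the array shapes, mask names, and smoothing term are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def dice_3d(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary 3D masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Illustrative usage with hypothetical vertebra masks defined on the same voxel
# grid; a value of 1.0 would correspond to a perfect 2D-to-3D translation.
if __name__ == "__main__":
    pred_mask = np.zeros((64, 64, 64), dtype=bool)
    gt_mask = np.zeros((64, 64, 64), dtype=bool)
    pred_mask[20:44, 20:44, 20:44] = True
    gt_mask[22:46, 20:44, 20:44] = True   # slightly shifted copy
    print(f"Dice: {dice_3d(pred_mask, gt_mask):.4f}")
```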
Related papers
- MedTet: An Online Motion Model for 4D Heart Reconstruction [59.74234226055964]
We present a novel approach to reconstruction of 3D cardiac motion from sparse intraoperative data.
Existing methods can accurately reconstruct 3D organ geometries from full 3D volumetric imaging.
We propose a versatile framework for reconstructing 3D motion from such partial data.
arXiv Detail & Related papers (2024-12-03T17:18:33Z)
- 3D Spine Shape Estimation from Single 2D DXA [49.53978253009771]
We propose an automated framework to estimate the 3D spine shape from 2D DXA scans.
We achieve this by explicitly predicting the sagittal view of the spine from the DXA scan.
arXiv Detail & Related papers (2024-12-02T13:58:26Z)
- SurgPointTransformer: Vertebrae Shape Completion with RGB-D Data [0.0]
This study introduces an alternative, radiation-free approach for reconstructing the 3D spine anatomy using RGB-D data.
We introduce SurgPointTransformer, a shape completion approach for surgical applications that can accurately reconstruct the unexposed spine regions from sparse observations of the exposed surface.
Our method significantly outperforms the state-of-the-art baselines, achieving an average Chamfer Distance of 5.39, an F-Score of 0.85, an Earth Mover's Distance of 0.011, and a Signal-to-Noise Ratio of 22.90 dB (a generic Chamfer-distance sketch follows this list).
arXiv Detail & Related papers (2024-10-02T11:53:28Z)
- On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- CNN-based real-time 2D-3D deformable registration from a single X-ray projection [2.1198879079315573]
This paper presents a method for real-time 2D-3D non-rigid registration using a single fluoroscopic image.
A dataset composed of displacement fields and 2D projections of the anatomy is generated from a preoperative scan.
A neural network is trained to recover the unknown 3D displacement field from a single projection image.
arXiv Detail & Related papers (2022-12-15T09:57:19Z)
- IGCN: Image-to-graph Convolutional Network for 2D/3D Deformable Registration [1.2246649738388387]
We propose an image-to-graph convolutional network that achieves deformable registration of a 3D organ mesh for a single-viewpoint 2D projection image.
We show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from radiographs with clinically acceptable accuracy.
arXiv Detail & Related papers (2021-10-31T12:48:37Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
- End-To-End Convolutional Neural Network for 3D Reconstruction of Knee Bones From Bi-Planar X-Ray Images [6.645111950779666]
We present an end-to-end Convolutional Neural Network (CNN) approach for 3D reconstruction of knee bones directly from two bi-planar X-ray images.
arXiv Detail & Related papers (2020-04-02T08:37:11Z)
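The SurgPointTransformer entry above reports point-cloud metrics (Chamfer Distance, F-Score, Earth Mover's Distance, SNR). As a reference for the first of these, here is a minimal, generic Chamfer-distance sketch over two point clouds. It uses a brute-force nearest-neighbour search and symmetric averaging as one common convention; it is not that paper's evaluation code, and units, scaling, or the exact variant may differ.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3).

    One common convention: mean nearest-neighbour distance from a to b plus
    the mean from b to a. Some papers use squared distances or a single
    direction, so reported numbers are not directly comparable across works.
    """
    # Pairwise Euclidean distances, shape (N, M), via broadcasting.
    diff = a[:, None, :] - b[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    a_to_b = dists.min(axis=1).mean()   # each point in a to its closest point in b
    b_to_a = dists.min(axis=0).mean()   # each point in b to its closest point in a
    return float(a_to_b + b_to_a)

# Illustrative usage with random clouds standing in for a completed and a
# ground-truth vertebra surface.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred_points = rng.normal(size=(500, 3))
    gt_points = rng.normal(size=(600, 3))
    print(f"Chamfer distance: {chamfer_distance(pred_points, gt_points):.3f}")
```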