3D Reconstruction and Alignment by Consumer RGB-D Sensors and Fiducial
Planar Markers for Patient Positioning in Radiation Therapy
- URL: http://arxiv.org/abs/2103.12162v1
- Date: Mon, 22 Mar 2021 20:20:59 GMT
- Authors: Hamid Sarmadi, Rafael Muñoz-Salinas, M. Álvaro Berbís, Antonio
Luna, Rafael Medina-Carnicer
- Abstract summary: This paper proposes a fast and cheap patient positioning method based on inexpensive consumer-level RGB-D sensors.
The proposed method relies on a 3D reconstruction approach that fuses, in real time, artificial and natural visual landmarks recorded from a hand-held RGB-D sensor.
- Score: 1.7744342894757368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: BACKGROUND AND OBJECTIVE: Patient positioning is a crucial step in radiation
therapy, for which non-invasive methods have been developed based on surface
reconstruction using optical 3D imaging. However, most solutions need expensive
specialized hardware and a careful calibration procedure that must be repeated
over time. This paper proposes a fast and cheap patient positioning method based
on inexpensive consumer-level RGB-D sensors.
METHODS: The proposed method relies on a 3D reconstruction approach that
fuses, in real time, artificial and natural visual landmarks recorded from a
hand-held RGB-D sensor. The video sequence is transformed into a set of
keyframes with known poses that are later refined to obtain a realistic 3D
reconstruction of the patient. The use of artificial landmarks allows our
method to automatically align the reconstruction to a reference one, without
the need to calibrate the system with respect to the linear accelerator
coordinate system.
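The marker-based alignment described above reduces, at its core, to estimating a rigid transform between corresponding 3D points (e.g. marker corners observed in both the current and the reference reconstruction). The paper's implementation is not reproduced here; the following is only a minimal sketch of one standard way to solve this sub-problem (the Kabsch algorithm), where the function name and point sets are illustrative assumptions:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over corresponding 3D points, via the Kabsch algorithm."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With noiseless correspondences this recovers the transform exactly; in practice the marker corners carry sensor noise and the estimate is a least-squares fit.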
RESULTS: The experiments conducted show that our method achieves a median
translational error of 1 cm and a median rotational error of 1 degree with
respect to the reference pose. Additionally, the proposed method provides, as
visual output, overlaid poses (from the reference and the current scene) and an
error map that can be used to correct the patient's current pose to match the
reference pose.
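The error map mentioned above can be pictured as a per-point distance field between the current scan and the reference surface. As a hedged sketch (not the authors' code; the function name and brute-force nearest-neighbor search are assumptions for illustration):

```python
import numpy as np

def error_map(current, reference):
    """Per-point error map: for each point of the current scan, the distance
    to its nearest neighbor in the reference scan (brute force, O(N*M))."""
    current = np.asarray(current, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # (N, M) matrix of pairwise Euclidean distances
    d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1)  # one error value per current-scan point
```

The returned values could then be color-mapped onto the scan to guide repositioning; for large scans a spatial index (e.g. a k-d tree) would replace the brute-force search.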
CONCLUSIONS: A novel approach to obtain 3D body reconstructions for patient
positioning without requiring expensive hardware or dedicated graphics cards is
proposed. The method can be used to align, in real time, the patient's current
pose to a previous pose, which is a relevant step in radiation therapy.
Related papers
- SurgPointTransformer: Vertebrae Shape Completion with RGB-D Data [0.0]
This study introduces an alternative, radiation-free approach for reconstructing the 3D spine anatomy using RGB-D data.
We introduce SurgPointTransformer, a shape completion approach for surgical applications that can accurately reconstruct the unexposed spine regions from sparse observations of the exposed surface.
Our method significantly outperforms the state-of-the-art baselines, achieving an average Chamfer Distance of 5.39, an F-Score of 0.85, an Earth Mover's Distance of 0.011, and a Signal-to-Noise Ratio of 22.90 dB.
arXiv Detail & Related papers (2024-10-02T11:53:28Z)
- SALVE: A 3D Reconstruction Benchmark of Wounds from Consumer-grade Videos [20.69257610322339]
This paper presents a study on 3D wound reconstruction from consumer-grade videos.
We introduce the SALVE dataset, comprising video recordings of realistic wound phantoms captured with different cameras.
We assess the accuracy and precision of state-of-the-art methods for 3D reconstruction, ranging from traditional photogrammetry pipelines to advanced neural rendering approaches.
arXiv Detail & Related papers (2024-07-29T02:34:51Z)
- Intraoperative 2D/3D Image Registration via Differentiable X-ray Rendering [5.617649111108429]
We present DiffPose, a self-supervised approach that leverages patient-specific simulation and differentiable physics-based rendering to achieve accurate 2D/3D registration without relying on manually labeled data.
DiffPose achieves sub-millimeter accuracy across surgical datasets at intraoperative speeds, improving upon existing unsupervised methods by an order of magnitude and even outperforming supervised baselines.
arXiv Detail & Related papers (2023-12-11T13:05:54Z)
- W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration [57.37135310143126]
Previous methods for 3D motion recovery from monocular images often fall short due to reliance on camera coordinates.
We introduce W-HMR, a weak-supervised calibration method that predicts "reasonable" focal lengths based on body distortion information.
We also present the OrientCorrect module, which corrects body orientation for plausible reconstructions in world space.
arXiv Detail & Related papers (2023-11-29T09:02:07Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction [64.36535692191343]
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
arXiv Detail & Related papers (2022-07-22T10:05:36Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Stereo Dense Scene Reconstruction and Accurate Laparoscope Localization for Learning-Based Navigation in Robot-Assisted Surgery [37.14020061063255]
The computation of anatomical information and laparoscope position is a fundamental block of robot-assisted surgical navigation in Minimally Invasive Surgery (MIS).
We propose a learning-driven framework in which image-guided laparoscope localization is achieved together with 3D reconstruction of complex anatomical structures.
arXiv Detail & Related papers (2021-10-08T06:12:18Z)
- Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
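Several entries above report point-set metrics such as the Chamfer Distance. Conventions vary between papers (squared vs. unsquared distances, sum vs. mean of the two directions), so the following is only an illustrative sketch of one common symmetric form, not the exact metric used by any paper listed here:

```python
import numpy as np

def chamfer_distance(a, b):
    """One common symmetric Chamfer distance between point sets a (N,3)
    and b (M,3): mean nearest-neighbor distance in each direction, summed."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For example, two single-point clouds one unit apart score 2.0 under this convention (1.0 in each direction), and identical clouds score 0.0.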
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.