Shape Completion and Real-Time Visualization in Robotic Ultrasound Spine Acquisitions
- URL: http://arxiv.org/abs/2508.08923v1
- Date: Tue, 12 Aug 2025 13:19:37 GMT
- Title: Shape Completion and Real-Time Visualization in Robotic Ultrasound Spine Acquisitions
- Authors: Miruna-Alexandra Gafencu, Reem Shaban, Yordanka Velikova, Mohammad Farid Azampour, Nassir Navab
- Abstract summary: We introduce a novel integrated system that combines robotic ultrasound with real-time shape completion to enhance spinal visualization. Our robotic platform autonomously acquires US sweeps of the lumbar spine, extracts vertebral surfaces from ultrasound, and reconstructs the complete anatomy. This framework provides interactive, real-time visualization with the capability to autonomously repeat scans and can enable navigation to target locations.
- Score: 35.52384354036415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) imaging is increasingly used in spinal procedures due to its real-time, radiation-free capabilities; however, its effectiveness is hindered by shadowing artifacts that obscure deeper tissue structures. Traditional approaches, such as CT-to-US registration, incorporate anatomical information from preoperative CT scans to guide interventions, but they are limited by complex registration requirements, differences in spine curvature, and the need for recent CT imaging. Recent shape completion methods can offer an alternative by reconstructing spinal structures in US data, while being pretrained on a large set of publicly available CT scans. However, these approaches are typically offline and have limited reproducibility. In this work, we introduce a novel integrated system that combines robotic ultrasound with real-time shape completion to enhance spinal visualization. Our robotic platform autonomously acquires US sweeps of the lumbar spine, extracts vertebral surfaces from ultrasound, and reconstructs the complete anatomy using a deep learning-based shape completion network. This framework provides interactive, real-time visualization with the capability to autonomously repeat scans and can enable navigation to target locations. This can contribute to better consistency, reproducibility, and understanding of the underlying anatomy. We validate our approach through quantitative experiments assessing shape completion accuracy and evaluations of multiple spine acquisition protocols on a phantom setup. Additionally, we present qualitative results of the visualization on a volunteer scan.
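The surface-extraction step described in the abstract can be pictured as lifting per-frame 2D vertebral-surface masks into a single 3D partial point cloud, using the robot's tracked probe poses, before that cloud is passed to the completion network. The sketch below is a minimal, hypothetical illustration (the function name, pose convention, and pixel spacing are assumptions, not the paper's implementation):

```python
import numpy as np

def masks_to_point_cloud(masks, poses, pixel_spacing=(0.2, 0.2)):
    """Lift per-frame 2D vertebral-surface masks into one 3D point cloud.

    masks: (N, H, W) boolean arrays, one segmentation mask per US frame.
    poses: (N, 4, 4) homogeneous transforms mapping image-plane points
           (in mm, with z = 0) into the robot/world frame.
    pixel_spacing: (row_mm, col_mm) physical size of one pixel.
    """
    points = []
    for mask, T in zip(masks, poses):
        rows, cols = np.nonzero(mask)
        # In-plane coordinates in millimetres; the US image lies in z = 0.
        xyz = np.stack([cols * pixel_spacing[1],
                        rows * pixel_spacing[0],
                        np.zeros_like(rows, dtype=float)], axis=1)
        homog = np.hstack([xyz, np.ones((len(xyz), 1))])
        points.append((homog @ T.T)[:, :3])  # transform into the world frame
    return np.concatenate(points, axis=0) if points else np.empty((0, 3))
```

A partial cloud assembled this way is the typical input format for shape completion networks, which then predict the occluded (shadowed) parts of each vertebra.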
Related papers
- US-X Complete: A Multi-Modal Approach to Anatomical 3D Shape Recovery [38.0728021642047]
We present a novel multi-modal learning method for completing occluded anatomical structures in 3D ultrasound. We generate paired training data consisting of: (1) 2D lateral vertebral views that simulate X-ray scans, and (2) 3D partial vertebrae representations. Our method integrates morphological information from both imaging modalities and demonstrates significant improvements in vertebral reconstruction.
arXiv Detail & Related papers (2025-11-19T16:45:04Z) - Semantic Segmentation for Preoperative Planning in Transcatheter Aortic Valve Replacement [61.573750959726475]
We consider medical guidelines for preoperative planning of the transcatheter aortic valve replacement (TAVR) and identify tasks that may be supported via semantic segmentation models. We first derive fine-grained TAVR-relevant pseudo-labels from coarse-grained anatomical information, in order to train segmentation models and quantify how well they are able to find these structures in the scans.
arXiv Detail & Related papers (2025-07-22T13:24:45Z) - SurgPointTransformer: Vertebrae Shape Completion with RGB-D Data [0.0]
This study introduces an alternative, radiation-free approach for reconstructing the 3D spine anatomy using RGB-D data.
We introduce SurgPointTransformer, a shape completion approach for surgical applications that can accurately reconstruct the unexposed spine regions from sparse observations of the exposed surface.
Our method significantly outperforms the state-of-the-art baselines, achieving an average Chamfer Distance of 5.39, an F-Score of 0.85, an Earth Mover's Distance of 0.011, and a Signal-to-Noise Ratio of 22.90 dB.
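The metrics reported above (Chamfer Distance and F-Score) are standard for comparing a completed point cloud against ground truth. A brute-force reference computation is sketched below; note that Chamfer conventions vary across papers (sum vs. mean, squared vs. unsquared distances), so the exact variant and the threshold value here are assumptions, not those of SurgPointTransformer:

```python
import numpy as np

def chamfer_and_fscore(pred, gt, tau=1.0):
    """Symmetric Chamfer Distance and F-Score between two 3D point sets.

    pred: (N, 3) predicted points; gt: (M, 3) ground-truth points.
    tau: distance threshold for counting a point as a match (F-Score).
    Brute-force O(N*M); fine for a few thousand points.
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    d_pred = d.min(axis=1)   # each predicted point -> nearest GT point
    d_gt = d.min(axis=0)     # each GT point -> nearest predicted point
    chamfer = d_pred.mean() + d_gt.mean()
    precision = (d_pred < tau).mean()   # fraction of pred points matched
    recall = (d_gt < tau).mean()        # fraction of GT points covered
    f_score = 2 * precision * recall / (precision + recall + 1e-12)
    return chamfer, f_score
```

For large clouds, the pairwise distance matrix is usually replaced with a k-d tree nearest-neighbour query to keep memory bounded.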
arXiv Detail & Related papers (2024-10-02T11:53:28Z) - Class-Aware Cartilage Segmentation for Autonomous US-CT Registration in Robotic Intercostal Ultrasound Imaging [39.597735935731386]
A class-aware cartilage bone segmentation network with geometry-constraint post-processing is presented to capture patient-specific rib skeletons.
A dense skeleton graph-based non-rigid registration is presented to map the intercostal scanning path from a generic template to individual patients.
Results demonstrate that the proposed graph-based registration method can robustly and precisely map the path from CT template to individual patients.
arXiv Detail & Related papers (2024-06-06T14:15:15Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Domain adaptation strategies for 3D reconstruction of the lumbar spine using real fluoroscopy data [9.21828361691977]
This study tackles key obstacles in adopting surgical navigation in orthopedic surgeries.
It shows an approach for generating 3D anatomical models of the spine from only a few fluoroscopic images.
It achieved an 84% F1 score, matching the accuracy of our previous synthetic data-based research.
arXiv Detail & Related papers (2024-01-29T10:22:45Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Enabling Augmented Segmentation and Registration in Ultrasound-Guided
Spinal Surgery via Realistic Ultrasound Synthesis from Diagnostic CT Volume [19.177141722698188]
The scarcity of intra-operative clinical US data is an insurmountable bottleneck in training a neural network.
We propose an in silico bone US simulation framework that synthesizes realistic US images from diagnostic CT volumes.
We train a lightweight vision transformer model that can achieve accurate and on-the-fly bone segmentation for spinal sonography.
arXiv Detail & Related papers (2023-01-05T07:28:06Z) - Towards Autonomous Atlas-based Ultrasound Acquisitions in Presence of
Articulated Motion [48.52403516006036]
This paper proposes a vision-based approach allowing autonomous robotic US limb scanning.
To this end, an atlas MRI template of a human arm with annotated vascular structures is used to generate trajectories.
In all cases, the system can successfully acquire the planned vascular structure on volunteers' limbs.
arXiv Detail & Related papers (2022-08-10T15:39:20Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
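The beamforming step that this last paper targets, mapping received echoes to the spatial image domain, is classically done with delay-and-sum: for each image pixel, compute the round-trip time of flight to every array element, sample each channel at that delay, and sum coherently. A naive plane-wave sketch is shown below (array geometry, 0° transmit angle, and nearest-sample interpolation are simplifying assumptions; deep learning methods replace or augment parts of this pipeline):

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, pixel_x, pixel_z):
    """Naive delay-and-sum beamforming for a linear array, 0-degree plane-wave TX.

    rf: (n_elements, n_samples) received RF channel data.
    element_x: (n_elements,) lateral element positions in metres.
    fs: sampling frequency in Hz.  c: speed of sound in m/s.
    pixel_x, pixel_z: 1D grids of image pixel coordinates in metres.
    Returns a (len(pixel_z), len(pixel_x)) beamformed image.
    """
    n_el, n_samp = rf.shape
    img = np.zeros((len(pixel_z), len(pixel_x)))
    for iz, z in enumerate(pixel_z):
        for ix, x in enumerate(pixel_x):
            # Transmit delay (plane wave: depth / c) plus per-element
            # receive delay (element-to-pixel distance / c).
            t = (z + np.sqrt(z**2 + (element_x - x)**2)) / c
            idx = np.round(t * fs).astype(int)  # nearest-sample interpolation
            valid = idx < n_samp
            img[iz, ix] = rf[valid, idx[valid]].sum()  # coherent sum
    return img
```

Production beamformers add apodization, sub-sample interpolation, and envelope detection; learned beamformers typically replace the fixed delay-and-sum weights with trained ones.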
This list is automatically generated from the titles and abstracts of the papers in this site.