HoloPOCUS: Portable Mixed-Reality 3D Ultrasound Tracking, Reconstruction
and Overlay
- URL: http://arxiv.org/abs/2308.13823v1
- Date: Sat, 26 Aug 2023 09:28:20 GMT
- Title: HoloPOCUS: Portable Mixed-Reality 3D Ultrasound Tracking, Reconstruction
and Overlay
- Authors: Kian Wei Ng, Yujia Gao, Shaheryar Mohammed Furqan, Zachery Yeo, Joel
Lau, Kee Yuan Ngiam, Eng Tat Khoo
- Abstract summary: HoloPOCUS is a mixed reality US system that overlays rich US information onto the user's vision in a point-of-care setting.
We validated a tracking pipeline that demonstrates higher accuracy compared to existing MR-US works.
- Score: 2.069072041357411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) imaging provides a safe and accessible solution to procedural
guidance and diagnostic imaging. Effective use of conventional 2D US for
interventional guidance requires extensive experience to project the image
plane onto the patient, and image interpretation in diagnostics suffers
from high intra- and inter-user variability. 3D US reconstruction allows for
more consistent diagnosis and interpretation, but existing solutions are
limited in terms of equipment and applicability in real-time navigation. To
address these issues, we propose HoloPOCUS - a mixed reality US system (MR-US)
that overlays rich US information onto the user's vision in a point-of-care
setting. HoloPOCUS extends existing MR-US methods beyond placing a US plane in
the user's vision to include a 3D reconstruction and projection that can aid in
procedural guidance using conventional probes. We validated a tracking pipeline
that demonstrates higher accuracy compared to existing MR-US works.
Furthermore, user studies conducted via a phantom task showed significant
improvements in navigation duration when using our proposed methods.
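As an illustration of the geometry underlying the tracking, reconstruction and overlay described above, the sketch below maps a 2D US pixel into world (headset) coordinates from a tracked probe pose and a fixed image-to-probe calibration. It is a minimal sketch under assumed conventions; the transform names, pixel spacings, and identity poses are illustrative, not HoloPOCUS's actual interfaces.

```python
import numpy as np

def pixel_to_world(u, v, T_world_probe, T_probe_image, sx, sy):
    """Map a 2D US pixel (u, v) into 3D world coordinates.

    T_world_probe : 4x4 pose of the tracked probe in the world/headset frame
                    (supplied by a tracking pipeline).
    T_probe_image : 4x4 spatial calibration from the US image plane to the
                    probe body (estimated once, offline).
    sx, sy        : pixel spacing in mm/pixel along the image axes.
    """
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # image plane is z = 0
    p_world = T_world_probe @ T_probe_image @ p_image
    return p_world[:3]

if __name__ == "__main__":
    T_world_probe = np.eye(4)  # placeholder; would come from probe tracking
    T_probe_image = np.eye(4)  # placeholder; would come from calibration
    print(pixel_to_world(128, 256, T_world_probe, T_probe_image, 0.2, 0.2))
```

Applying this mapping to every pixel of each incoming frame provides both the geometry for a live overlay and the scattered samples needed for incremental 3D compounding.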
Related papers
- EndoGSLAM: Real-Time Dense Reconstruction and Tracking in Endoscopic Surgeries using Gaussian Splatting [53.38166294158047]
EndoGSLAM is an efficient approach for endoscopic surgeries, which integrates streamlined Gaussian representation and differentiable rasterization.
Experiments show that EndoGSLAM achieves a better trade-off between intraoperative availability and reconstruction quality than traditional or neural SLAM approaches.
arXiv Detail & Related papers (2024-03-22T11:27:43Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments [66.74633676595889]
First, we present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z)
- Agent with Tangent-based Formulation and Anatomical Perception for Standard Plane Localization in 3D Ultrasound [56.7645826576439]
We introduce a novel reinforcement learning framework for automatic SP localization in 3D US.
First, we formulate SP localization in 3D US as a tangent-point-based problem in RL to restructure the action space.
Second, we design an auxiliary task learning strategy to enhance the model's ability to recognize subtle differences between non-SPs and SPs during plane search.
arXiv Detail & Related papers (2022-07-01T14:53:27Z)
- VesNet-RL: Simulation-based Reinforcement Learning for Real-World US Probe Navigation [39.7566010845081]
In freehand US examinations, sonographers often navigate a US probe to visualize standard examination planes with rich diagnostic information.
We propose a simulation-based RL framework for real-world navigation of US probes towards the standard longitudinal views of vessels.
arXiv Detail & Related papers (2022-05-10T09:34:42Z)
- 3D endoscopic depth estimation using 3D surface-aware constraints [16.161276518580262]
We show that depth estimation can be reformulated from a 3D surface perspective.
We propose a loss function for depth estimation that integrates the surface-aware constraints.
Camera parameters are incorporated into the training pipeline to increase the control and transparency of the depth estimation.
arXiv Detail & Related papers (2022-03-04T04:47:20Z)
- Tunable Image Quality Control of 3-D Ultrasound using Switchable CycleGAN [25.593462273575625]
A 3-D US imaging system can visualize a volume along three axial planes.
3-D US has an inherent limitation in resolution compared to 2-D US.
We propose a novel unsupervised deep learning approach to improve 3-D US image quality.
arXiv Detail & Related papers (2021-12-06T09:40:16Z)
- Image-Guided Navigation of a Robotic Ultrasound Probe for Autonomous Spinal Sonography Using a Shadow-aware Dual-Agent Framework [35.17207004351791]
We propose a novel dual-agent framework that integrates a reinforcement learning agent and a deep learning agent.
Our method can effectively interpret the US images and navigate the probe to acquire multiple standard views of the spine.
arXiv Detail & Related papers (2021-11-03T12:11:27Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)
- Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning [13.844630500061378]
Current methods for 3D volume reconstruction from freehand US scans require external tracking devices to provide spatial position for every frame.
We propose a deep contextual learning network (DCL-Net), which can efficiently exploit the image feature relationship between US frames and reconstruct 3D US volumes without any tracking device.
arXiv Detail & Related papers (2020-06-13T18:37:30Z)
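The sensorless reconstruction entry above (DCL-Net) replaces tracker readings with network-predicted frame-to-frame motion; the sketch below shows only the accumulation step that turns such relative predictions into per-frame poses for compounding. It is a minimal illustration under an assumed transform convention, and the names are hypothetical rather than the paper's code.

```python
import numpy as np

def accumulate_frame_poses(relative_transforms):
    """Chain predicted frame-to-frame motions into absolute frame poses.

    relative_transforms : iterable of 4x4 arrays, where element i maps points
        expressed in frame i+1 into frame i (e.g. regressed by a network from
        consecutive US frames, with no external tracking device).
    Returns 4x4 poses that map each frame into frame 0's coordinate system,
    which is what a volume-compounding step needs to place pixels in space.
    """
    poses = [np.eye(4)]
    for T_rel in relative_transforms:
        poses.append(poses[-1] @ T_rel)
    return poses

if __name__ == "__main__":
    # Two dummy predictions: 1 mm of elevational translation per frame.
    step = np.eye(4)
    step[2, 3] = 1.0
    for pose in accumulate_frame_poses([step, step]):
        print(pose[:3, 3])
```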
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.