Mixed Reality Communication for Medical Procedures: Teaching the
Placement of a Central Venous Catheter
- URL: http://arxiv.org/abs/2312.08624v1
- Date: Thu, 14 Dec 2023 03:11:20 GMT
- Title: Mixed Reality Communication for Medical Procedures: Teaching the
Placement of a Central Venous Catheter
- Authors: Manuel Rebol, Krzysztof Pietroszek, Claudia Ranniger, Colton Hood,
Adam Rutenberg, Neal Sikka, David Li, Christian Gütl
- Abstract summary: We present a mixed reality real-time communication system to increase access to procedural skill training and to improve remote emergency assistance.
RGBD cameras capture a volumetric view of the local scene including the patient, the operator, and the medical equipment.
The volumetric capture is augmented onto the remote expert's view to allow the expert to spatially guide the local operator using visual and verbal instructions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical procedures are an essential part of healthcare delivery, and the
acquisition of procedural skills is a critical component of medical education.
Unfortunately, procedural skill is not evenly distributed among medical
providers. Skills may vary within departments or institutions, and across
geographic regions, depending on the provider's training and ongoing
experience. We present a mixed reality real-time communication system to
increase access to procedural skill training and to improve remote emergency
assistance. Our system allows a remote expert to guide a local operator through
a medical procedure. RGBD cameras capture a volumetric view of the local scene
including the patient, the operator, and the medical equipment. The volumetric
capture is augmented onto the remote expert's view to allow the expert to
spatially guide the local operator using visual and verbal instructions. We
evaluated our mixed reality communication system in a study in which experts
teach the ultrasound-guided placement of a central venous catheter (CVC) to
students in a simulation setting. The study compares state-of-the-art video
communication against our system. The results indicate that our system enhances
visual communication and offers new possibilities beyond what video
teleconference-based training provides.
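The abstract describes capturing a volumetric view of the local scene with RGBD cameras and augmenting it onto the remote expert's view. A minimal sketch of the standard back-projection step that such RGBD pipelines rely on is shown below; this is not the authors' implementation, and the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) are assumed parameters:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud
    using pinhole camera intrinsics. Illustrative sketch of the kind
    of volumetric capture an RGBD system performs per frame."""
    h, w = depth.shape
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Invert the pinhole projection: u = fx * x / z + cx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels, which RGBD sensors report as zero depth.
    return points[points[:, 2] > 0]

# Example: a 2x2 depth map of a flat surface one meter from the camera.
pts = depth_to_point_cloud(np.ones((2, 2)), fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(pts.shape)  # (4, 3)
```

In a full system, point clouds from multiple registered cameras would be fused and streamed to the remote expert's headset, where visual annotations are rendered in the same coordinate frame.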
Related papers
- Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation
Surgical video-language pretraining faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data.
We propose a hierarchical knowledge augmentation approach and a novel Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining framework to tackle these issues.
arXiv Detail & Related papers (2024-09-30T22:21:05Z)
- Automated Patient Positioning with Learned 3D Hand Gestures
We propose an automated patient positioning system that utilizes a camera to detect specific hand gestures from technicians.
Our approach relies on a novel multi-stage pipeline to recognize and interpret the technicians' gestures.
Results show that our system achieves accurate and precise patient positioning with minimal technician intervention.
arXiv Detail & Related papers (2024-07-20T15:32:24Z)
- Benchmarking Large Language Models on Communicative Medical Coaching: a Novel System and Dataset
We introduce "ChatCoach", a human-AI cooperative framework designed to assist medical learners in practicing their communication skills during patient consultations.
ChatCoach differentiates itself from conventional dialogue systems by offering a simulated environment where medical learners can practice dialogues with a patient agent, while a coach agent provides immediate, structured feedback.
We have developed a dataset specifically for evaluating Large Language Models (LLMs) within the ChatCoach framework on communicative medical coaching tasks.
arXiv Detail & Related papers (2024-02-08T10:32:06Z)
- Deep Multimodal Fusion for Surgical Feedback Classification
We leverage a clinically-validated five-category classification of surgical feedback.
We then develop a multi-label machine learning model to classify these five categories of surgical feedback from inputs of text, audio, and video modalities.
The ultimate goal of our work is to help automate the annotation of real-time contextual surgical feedback at scale.
arXiv Detail & Related papers (2023-12-06T01:59:47Z)
- Learning Multi-modal Representations by Watching Hundreds of Surgical Video Lectures
Recent advancements in surgical computer vision have been driven by vision-only models, which lack language semantics.
We propose leveraging surgical video lectures from e-learning platforms to provide effective vision and language supervisory signals.
We address surgery-specific linguistic challenges using multiple automatic speech recognition systems for text transcriptions.
arXiv Detail & Related papers (2023-07-27T22:38:12Z)
- Validating a virtual human and automated feedback system for training doctor-patient communication skills
We present the development and validation of a scalable, easily accessible digital tool known as the Standardized Online Patient for Health Interaction Education (SOPHIE).
We found that participants who trained with SOPHIE performed significantly better than the control group in overall communication, aggregate scores, empowering the patient, and showing empathy.
One day, we hope that SOPHIE will help make communication training resources more accessible by providing a scalable option to supplement existing resources.
arXiv Detail & Related papers (2023-06-27T05:23:08Z)
- Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Robotic Surgery Remote Mentoring via AR with 3D Scene Streaming and Hand Interaction
We propose a novel AR-based robotic surgery remote mentoring system with efficient 3D scene visualization and natural 3D hand interaction.
Using a head-mounted display (i.e., HoloLens), the mentor can remotely monitor the procedure streamed from the trainee's operation side.
We validate the system on both real surgery stereo videos and ex-vivo scenarios of common robotic training tasks.
arXiv Detail & Related papers (2022-04-09T03:17:15Z)
- An Interpretable Multiple-Instance Approach for the Detection of referable Diabetic Retinopathy from Fundus Images
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z)
- DAISI: Database for AI Surgical Instruction
Telementoring surgeons as they perform surgery can be essential in the treatment of patients when in situ expertise is not available.
When mentors are unavailable, a fallback autonomous mechanism should provide medical practitioners with the required guidance.
This work presents the first Database for AI Surgical Instruction (DAISI).
arXiv Detail & Related papers (2020-03-22T22:07:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.