Automated Patient Positioning with Learned 3D Hand Gestures
- URL: http://arxiv.org/abs/2407.14903v1
- Date: Sat, 20 Jul 2024 15:32:24 GMT
- Title: Automated Patient Positioning with Learned 3D Hand Gestures
- Authors: Zhongpai Gao, Abhishek Sharma, Meng Zheng, Benjamin Planche, Terrence Chen, Ziyan Wu
- Abstract summary: We propose an automated patient positioning system that utilizes a camera to detect specific hand gestures from technicians.
Our approach relies on a novel multi-stage pipeline to recognize and interpret the technicians' gestures.
Results show that our system achieves accurate and precise patient positioning with minimal technician intervention.
- Score: 29.90000893655248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Positioning patients for scanning and interventional procedures is a critical task that requires high precision and accuracy. The conventional workflow involves manually adjusting the patient support to align the center of the target body part with the laser projector or other guiding devices. This process is not only time-consuming but also prone to inaccuracies. In this work, we propose an automated patient positioning system that utilizes a camera to detect specific hand gestures from technicians, allowing users to indicate the target patient region to the system and initiate automated positioning. Our approach relies on a novel multi-stage pipeline to recognize and interpret the technicians' gestures, translating them into precise motions of medical devices. We evaluate our proposed pipeline during actual MRI scanning procedures, using RGB-Depth cameras to capture the process. Results show that our system achieves accurate and precise patient positioning with minimal technician intervention. Furthermore, we validate our method on HaGRID, a large-scale hand gesture dataset, demonstrating its effectiveness in hand detection and gesture recognition.
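The abstract describes a staged design (hand detection, gesture recognition, translation into device motion) without releasing code. Below is a minimal Python sketch of such a multi-stage pipeline; the stage functions, gesture labels, and region-to-offset mapping are all hypothetical stand-ins, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HandDetection:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in the RGB frame
    depth_mm: float                  # median depth of the hand region

@dataclass
class Gesture:
    label: str                       # e.g. "point_abdomen", "confirm"
    confidence: float

def detect_hand(rgb, depth) -> Optional[HandDetection]:
    # Stage 1: locate the technician's hand in the RGB-D frame.
    # A real system would run a trained detector here; this is a stub.
    return HandDetection(bbox=(320, 180, 64, 64), depth_mm=950.0)

def classify_gesture(rgb, det: HandDetection) -> Gesture:
    # Stage 2: recognize the gesture from the cropped hand region (stub).
    return Gesture(label="point_abdomen", confidence=0.97)

# Hypothetical mapping from gesture labels to target body regions,
# expressed as longitudinal table offsets in millimetres from isocenter.
REGION_OFFSET_MM = {"point_head": -600.0, "point_abdomen": 0.0, "point_knee": 700.0}

def plan_table_motion(g: Gesture, min_conf: float = 0.9) -> Optional[float]:
    # Stage 3: interpret the gesture and emit a table-motion command.
    # Low-confidence or unknown gestures are rejected so the patient
    # support never moves on an ambiguous input.
    if g.confidence < min_conf:
        return None
    return REGION_OFFSET_MM.get(g.label)

if __name__ == "__main__":
    rgb, depth = object(), object()        # stand-ins for camera frames
    det = detect_hand(rgb, depth)
    if det is not None:
        gesture = classify_gesture(rgb, det)
        offset = plan_table_motion(gesture)
        print(f"gesture={gesture.label!r} -> move table {offset} mm")
```

The confidence gate in the last stage reflects the safety-critical setting: the patient support should only move on an unambiguous gesture.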
Related papers
- Autonomous Robotic Ultrasound System for Liver Follow-up Diagnosis: Pilot Phantom Study [9.293259833488223]
The paper introduces a novel autonomous robot ultrasound (US) system targeting liver follow-up scans for outpatients in local communities.
The system achieves precise imaging of 3D hepatic veins, enabling accurate coordinate mapping between CT and the robot.
The proposed framework holds the potential to significantly reduce time and costs for healthcare providers, clinicians, and follow-up patients.
arXiv Detail & Related papers (2024-05-09T14:11:20Z)
- Training-free image style alignment for self-adapting domain shift on handheld ultrasound devices [54.476120039032594]
We propose the Training-free Image Style Alignment (TISA) framework to align the style of handheld device data to those of standard devices.
TISA can directly infer handheld device images without extra training and is suited for clinical applications.
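TISA is described only at a high level here. As a rough illustration of the general idea of training-free style alignment, the sketch below matches per-channel image statistics of a handheld-device image to a standard-device reference; this is a generic stand-in, not TISA's actual procedure.

```python
import numpy as np

def match_channel_stats(handheld: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift/scale each channel of a handheld-device image so its mean and
    std match a standard-device reference. Inputs: float arrays (H, W, C)."""
    out = np.empty_like(handheld, dtype=np.float64)
    for c in range(handheld.shape[2]):
        src, ref = handheld[..., c], reference[..., c]
        std = src.std() if src.std() > 1e-8 else 1.0
        out[..., c] = (src - src.mean()) / std * ref.std() + ref.mean()
    return out

# Usage with random stand-in images:
handheld = np.random.rand(256, 256, 3) * 0.5   # dim, low-contrast source
reference = np.random.rand(256, 256, 3)        # standard-device look
aligned = match_channel_stats(handheld, reference)
```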
arXiv Detail & Related papers (2024-02-17T07:15:23Z)
- Data-Driven Goal Recognition in Transhumeral Prostheses Using Process Mining Techniques [7.95507524742396]
Active prostheses utilize real-valued, continuous sensor data to recognize patient target poses, or goals, and proactively move the artificial limb.
Previous studies have examined how well the data collected in stationary poses, without considering the time steps, can help discriminate the goals.
Our approach involves transforming the data into discrete events and training an existing process mining-based goal recognition system.
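The key pre-processing step is turning real-valued sensor streams into discrete events that a process-mining goal recognizer can consume. A minimal sketch in that spirit, with illustrative bin counts and event names:

```python
import numpy as np

def continuous_to_events(signal: np.ndarray, n_bins: int = 5) -> list:
    """Discretize a continuous 1-D sensor stream into symbolic events,
    emitting an event only when the active bin changes."""
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)[1:-1]
    bins = np.digitize(signal, edges)      # bin index per time step
    events, prev = [], None
    for b in bins:
        if b != prev:                      # keep only bin transitions
            events.append(f"level_{b}")
            prev = b
    return events

# A rising joint-angle reading becomes a short trace of discrete events.
angle = np.concatenate([np.linspace(0, 1, 50), np.full(30, 1.0)])
print(continuous_to_events(angle))  # ['level_0', 'level_1', ..., 'level_4']
```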
arXiv Detail & Related papers (2023-09-15T02:03:59Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
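The summary leaves the three features unspecified. A plausible sketch of windowed feature extraction over the five capacitive channels, using mean, standard deviation, and range as illustrative stand-ins for the paper's features:

```python
import numpy as np

def window_features(window: np.ndarray) -> np.ndarray:
    """Summarize a 500 ms window of 5-channel capacitive signals with
    three per-channel features. window: shape (n_samples, 5)."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    rng = window.max(axis=0) - window.min(axis=0)
    return np.concatenate([mean, std, rng])  # 15-dim feature vector

# E.g. 100 Hz sampling -> 50 samples per 500 ms window; a lightweight
# classifier (SVM, random forest) would be trained on these vectors.
window = np.random.rand(50, 5)
features = window_features(window)
assert features.shape == (15,)
```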
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments [66.74633676595889]
First, we present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
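EasyHeC optimizes the camera pose by backpropagating through a differentiable renderer of the robot arm. As a much-simplified stand-in for that idea, the sketch below recovers a camera translation by finite-difference gradient descent on point reprojection error; rotation and rendering are omitted.

```python
import numpy as np

def project(points: np.ndarray, t: np.ndarray, f: float = 500.0) -> np.ndarray:
    """Pinhole projection of 3-D points after translating by camera offset t."""
    p = points + t
    return f * p[:, :2] / p[:, 2:3]

def refine_translation(points, observed, t0, lr=1e-7, steps=2000):
    """Gradient descent on squared reprojection error, with gradients
    approximated by finite differences (a stand-in for autodiff)."""
    t = t0.astype(np.float64).copy()
    for _ in range(steps):
        base = np.sum((project(points, t) - observed) ** 2)
        grad = np.zeros(3)
        for i in range(3):
            dt = np.zeros(3); dt[i] = 1e-6
            grad[i] = (np.sum((project(points, t + dt) - observed) ** 2) - base) / 1e-6
        t -= lr * grad
    return t

# Synthetic check: recover a known camera offset from 2-D observations.
pts = np.random.rand(20, 3) + np.array([0.0, 0.0, 2.0])
true_t = np.array([0.05, -0.03, 0.10])
obs = project(pts, true_t)
print(refine_translation(pts, obs, t0=np.zeros(3)))  # approaches true_t
```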
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method combines instrument pose estimation, an online registration between the robotic system and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Towards Predicting Fine Finger Motions from Ultrasound Images via Kinematic Representation [12.49914980193329]
We study the inference problem of identifying the activation of specific fingers from a sequence of US images.
We consider this task as an important step towards higher adoption rates of robotic prostheses among arm amputees.
arXiv Detail & Related papers (2022-02-10T18:05:09Z)
- Mapping Surgeon's Hand/Finger Motion During Conventional Microsurgery to Enhance Intuitive Surgical Robot Teleoperation [0.5635300481123077]
Current human-robot interfaces lack intuitive teleoperation and cannot mimic a surgeon's hand/finger sensing and fine motion.
We report a pilot study showing an intuitive way of recording and mapping a surgeon's gross hand motion and fine synergic motion during cardiac micro-surgery.
arXiv Detail & Related papers (2021-02-21T11:21:30Z)
- One-shot action recognition towards novel assistive therapies [63.23654147345168]
This work is motivated by the automated analysis of medical therapies that involve action imitation games.
The presented approach incorporates a pre-processing step that standardizes heterogeneous motion data conditions.
We evaluate the approach on a real use-case of automated video analysis for therapy support with autistic people.
arXiv Detail & Related papers (2021-02-17T19:41:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.