ColibriDoc: An Eye-in-Hand Autonomous Trocar Docking System
- URL: http://arxiv.org/abs/2111.15373v1
- Date: Tue, 30 Nov 2021 13:21:37 GMT
- Title: ColibriDoc: An Eye-in-Hand Autonomous Trocar Docking System
- Authors: Shervin Dehghani, Michael Sommersperger, Junjie Yang, Benjamin Busam,
Kai Huang, Peter Gehlbach, Iulian Iordachita, Nassir Navab and M. Ali Nasseri
- Abstract summary: We present a platform for autonomous trocar docking that combines computer vision and a robotic setup.
Inspired by the Cuban Colibri (hummingbird) aligning its beak to a flower using only vision, we mount a camera onto the end-effector of a robotic system.
- Score: 46.91300647669861
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Retinal surgery is a complex medical procedure that requires exceptional
expertise and dexterity. For this purpose, several robotic platforms are
currently being developed to enable or improve the outcome of microsurgical
tasks. Since the control of such robots is often designed for navigation inside
the eye in proximity to the retina, successfully docking the trocar and
inserting the instrument into the eye demand additional cognitive effort and
therefore remain among the open challenges in robotic retinal surgery. To
address this, we present a platform for autonomous trocar docking that combines
computer vision and a robotic setup. Inspired by the Cuban Colibri
(hummingbird) aligning its beak to a flower using only vision, we mount a
camera onto the end-effector of a robotic system. By estimating the position and
pose of the trocar, the robot is able to autonomously align and navigate the
instrument towards the Trocar Entry Point (TEP) and finally perform the
insertion. Our experiments show that the proposed method is able to accurately
estimate the position and pose of the trocar and achieve repeatable autonomous
docking. The aim of this work is to reduce the complexity of robotic setup
preparation prior to the surgical task and thereby increase the intuitiveness
of integrating the system into the clinical workflow.
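A minimal sketch of the docking loop described in the abstract, assuming hypothetical `estimate_trocar_pose`, `camera`, and `robot` interfaces (the paper's actual pose estimator and robot API are not given here): the eye-in-hand camera observes the trocar, its pose is mapped into the robot base frame via the hand-eye calibration, and the instrument is servoed toward the TEP.

```python
import numpy as np

def estimate_trocar_pose(image):
    """Stand-in for the vision model: returns the TEP position (3,) and the
    trocar axis (3,) in the camera frame. Hypothetical, not the paper's model."""
    raise NotImplementedError

def docking_step(robot, camera, gain=0.5, tol=1e-4):
    """One closed-loop iteration: re-estimate the trocar pose, transform it
    into the robot base frame, and move the instrument toward the TEP."""
    tep_cam, axis_cam = estimate_trocar_pose(camera.grab())
    T = robot.camera_to_base()                 # 4x4 hand-eye calibration (assumed known)
    tep_base = T[:3, :3] @ tep_cam + T[:3, 3]  # TEP in the robot base frame
    axis_base = T[:3, :3] @ axis_cam
    error = tep_base - robot.tool_tip_position()
    robot.move_linear(gain * error, align_axis=axis_base)  # hypothetical robot API
    return float(np.linalg.norm(error)) < tol              # True once docked
```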
Related papers
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
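As a loose illustration of closed-loop visual control through a feature Jacobian, simplified from the learned Neural Jacobian Fields summarized above; the constant toy Jacobian, gain, and dimensions are illustrative assumptions, not the paper's method.

```python
import numpy as np

def visual_servo_step(q, J, feat, feat_goal, gain=0.1):
    """One step of Jacobian-based closed-loop control: move the joints so the
    observed visual features approach the goal features.
    q: joint angles (n,); J: feature Jacobian d(feat)/dq, shape (m, n)."""
    error = feat_goal - feat               # feature-space error (m,)
    dq = gain * np.linalg.pinv(J) @ error  # least-squares joint update
    return q + dq

# toy usage: 2-DoF system, 2D feature, constant Jacobian for illustration
q = np.zeros(2)
J = np.array([[1.0, 0.5], [0.0, 1.0]])
feat, goal = np.array([0.0, 0.0]), np.array([0.2, -0.1])
for _ in range(50):
    feat = J @ q                           # stand-in for observing the robot
    q = visual_servo_step(q, J, feat, goal)
```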
- Teach Me How to Learn: A Perspective Review towards User-centered Neuro-symbolic Learning for Robotic Surgical Systems [3.5672486441844553]
Recent advances in machine learning have allowed robots to identify objects on a perceptual, nonsymbolic level.
An alternative solution is to teach a robot on both perceptual nonsymbolic and conceptual symbolic levels.
This work proposes a concept for this user-centered hybrid learning paradigm that focuses on robotic surgical situations.
arXiv Detail & Related papers (2023-07-07T21:58:28Z)
- Bio-inspired spike-based Hippocampus and Posterior Parietal Cortex models for robot navigation and environment pseudo-mapping [52.77024349608834]
This work proposes a spike-based robotic navigation and environment pseudo-mapping system.
The hippocampus is in charge of maintaining a representation of an environment state map, and the PPC is in charge of local decision-making.
This is the first implementation of an environment pseudo-mapping system with dynamic learning based on a bio-inspired hippocampal memory.
arXiv Detail & Related papers (2023-05-22T10:20:34Z)
- Image Guidance for Robot-Assisted Ankle Fracture Repair [0.0]
The aim is to develop software that automatically determines directions for fibular repositioning and to demonstrate its proper functioning.
The work does not involve developing or implementing the hardware of the robot itself.
arXiv Detail & Related papers (2023-01-31T07:32:13Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation module, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
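The registration component of this entry lends itself to a short sketch, assuming point correspondences between the iOCT and robot frames are available (a simplification of the paper's online registration): a least-squares rigid fit maps an injection target from the iOCT volume into the robot frame.

```python
import numpy as np

def register_oct_to_robot(oct_pts, robot_pts):
    """Least-squares rigid registration (Kabsch) between corresponding points
    observed in the iOCT frame and the robot frame. Returns R, t such that
    robot_point = R @ oct_point + t."""
    mu_o, mu_r = oct_pts.mean(0), robot_pts.mean(0)
    H = (oct_pts - mu_o).T @ (robot_pts - mu_r)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_o
    return R, t

# demo: recover a known rigid transform from four corresponding fiducials
oct_pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
robot_pts = oct_pts @ R_true.T + np.array([0.1, 0.2, 0.3])
R, t = register_oct_to_robot(oct_pts, robot_pts)
target_robot = R @ np.array([0.1, 0.2, 0.05]) + t  # iOCT target -> robot frame
```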
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
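A tiny late-fusion sketch in the spirit of this summary (the toy encoders and dimensions are assumptions; the paper's actual model is not described here): embed each modality separately, concatenate, and map the fused vector to an action.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Tiny stand-in encoder: linear projection plus ReLU."""
    return np.maximum(W @ x, 0.0)

# hypothetical per-modality weights and observations
W_vis, W_aud, W_tac = (rng.standard_normal((8, d)) for d in (32, 16, 16))
vision = rng.standard_normal(32)  # e.g., camera embedding
audio = rng.standard_normal(16)   # contact-microphone features
touch = rng.standard_normal(16)   # vision-based tactile features

# late fusion: concatenate modality embeddings, then map to an action
fused = np.concatenate([encode(vision, W_vis), encode(audio, W_aud), encode(touch, W_tac)])
W_head = rng.standard_normal((7, fused.size))  # e.g., a 7-DoF action head
action = W_head @ fused
```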
- Autonomous Intraluminal Navigation of a Soft Robot using Deep-Learning-based Visual Servoing [13.268863900187025]
We present a synergic solution for intraluminal navigation consisting of a 3D printed endoscopic soft robot.
Visual servoing, based on Convolutional Neural Networks (CNNs), is used to achieve the autonomous navigation task.
The proposed robot is validated in anatomical phantoms in different path configurations.
arXiv Detail & Related papers (2022-07-01T13:17:45Z)
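A compact sketch of the CNN-based visual servoing idea in this entry (the model and actuation interface are hypothetical; the paper's network and soft-robot controller are not detailed in this summary): a network predicts the lumen center in the image, and the offset from the optical axis drives the bending command.

```python
import numpy as np

def predict_lumen_center(image, model):
    """Hypothetical CNN inference: returns the lumen center (u, v) in
    normalized image coordinates, each in [-1, 1]."""
    return model(image)

def servo_command(center, gain=0.8):
    """Map the image-space offset from the optical axis to bending commands,
    steering the tip toward the lumen center."""
    u, v = center
    return gain * np.array([u, v])  # (bend_x, bend_y) actuation

# toy usage with a stub model that always sees the lumen slightly up-right
stub_model = lambda img: (0.2, -0.1)
bend = servo_command(predict_lumen_center(None, stub_model))
```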
- Embedded Computer Vision System Applied to a Four-Legged Line Follower Robot [0.0]
This project aims to drive a robot using an automated embedded computer vision system, connecting the robot's vision to its behavior.
The robot is applied to a typical mobile-robot task: line following.
The decision of where to move next is based on the center of the line path and is fully automated.
arXiv Detail & Related papers (2021-01-12T23:52:53Z)
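The line-center rule in this entry maps to a few lines of code (the threshold and sign conventions are assumptions of this sketch, not the project's embedded implementation): binarize the frame, take the mean column of line pixels, and steer proportionally to the offset from the image center.

```python
import numpy as np

def steering_from_frame(frame, gain=0.01):
    """frame: 2D grayscale array where dark pixels are the line.
    Returns a steering command proportional to the line-center offset."""
    mask = frame < 64                       # threshold: the line is dark
    cols = np.nonzero(mask.any(axis=0))[0]  # columns containing line pixels
    if cols.size == 0:
        return 0.0                          # line lost: go straight (or search)
    line_center = cols.mean()
    offset = line_center - frame.shape[1] / 2.0
    return -gain * offset                   # steer back toward the line

# toy frame: a dark vertical line left of the image center
frame = np.full((120, 160), 255, dtype=np.uint8)
frame[:, 60:65] = 0
cmd = steering_from_frame(frame)            # positive here: steer left
```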
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)