Using Conditional Generative Adversarial Networks to Reduce the Effects
of Latency in Robotic Telesurgery
- URL: http://arxiv.org/abs/2010.11704v1
- Date: Wed, 7 Oct 2020 13:40:44 GMT
- Title: Using Conditional Generative Adversarial Networks to Reduce the Effects
of Latency in Robotic Telesurgery
- Authors: Neil Sachdeva, Misha Klopukh, Rachel St. Clair, William Hahn
- Abstract summary: In surgery, any micro-delay can injure a patient severely and in some cases, result in fatality.
Current surgical robots use calibrated sensors to measure the position of the arms and tools.
In this work we present a purely optical approach that provides a measurement of the tool position in relation to the patient's tissues.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The introduction of surgical robots brought about advancements in surgical
procedures. The applications of remote telesurgery range from building medical
clinics in underprivileged areas, to placing robots abroad in military
hot-spots where accessibility and diversity of medical experience may be
limited. Poor wireless connectivity may result in a prolonged delay, referred
to as latency, between a surgeon's input and the action a robot takes. In
surgery, any micro-delay can injure a patient severely and, in some cases,
result in fatality. One way to increase safety is to mitigate the effects of
latency using deep-learning-aided computer vision. While current surgical
robots use calibrated sensors to measure the position of the arms and tools, in this
work we present a purely optical approach that provides a measurement of the
tool position in relation to the patient's tissues. This research aimed to
produce a neural network that allowed a robot to detect its own mechanical
manipulator arms. A conditional generative adversarial network (cGAN) was
trained on 1107 frames of mock gastrointestinal robotic surgery data from the
2015 EndoVis Instrument Challenge and corresponding hand-drawn labels for each
frame. When run on new testing data, the network generated near-perfect labels
of the input images, visually consistent with the hand-drawn labels, in 299
milliseconds. These accurately generated labels
can then be used as simplified identifiers for the robot to track its own
controlled tools. These results show potential for conditional GANs as a
reaction mechanism such that the robot can detect when its arms move outside
the operating area within a patient. This system allows for more accurate
monitoring of the position of surgical instruments in relation to the patient's
tissue, increasing safety measures that are integral to successful telesurgery
systems.
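
The abstract describes the approach only at a high level. As a concrete illustration, below is a minimal sketch of a pix2pix-style conditional GAN that translates an endoscope frame into a tool-label mask, written in PyTorch. The encoder-decoder generator, PatchGAN-style discriminator, loss weighting, and all layer sizes are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Minimal pix2pix-style conditional GAN sketch: surgical frame -> tool-label mask.
# Architecture and hyperparameters are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder mapping a 3-channel frame to a 1-channel tool mask in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the frame (frame and mask concatenated)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, frame, mask):
        return self.net(torch.cat([frame, mask], dim=1))

def train_step(G, D, g_opt, d_opt, frame, real_mask, l1_weight=100.0):
    """One cGAN update; real_mask is the hand-drawn label, assumed scaled to [-1, 1]."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake_mask = G(frame)

    # Discriminator: real (frame, hand-drawn label) vs. fake (frame, generated label).
    d_opt.zero_grad()
    real_logits = D(frame, real_mask)
    fake_logits = D(frame, fake_mask.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the hand-drawn label.
    g_opt.zero_grad()
    fake_logits = D(frame, fake_mask)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + l1_weight * l1(fake_mask, real_mask)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In this kind of setup the L1 term keeps the generated mask close to the hand-drawn label while the adversarial term sharpens it; at inference time only the generator runs, which is what a per-frame latency figure such as the reported 299 ms would measure.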
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Surgical tool classification and localization: results and methods from
the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Reconstructing Robot Operations via Radio-Frequency Side-Channel [1.0742675209112622]
In recent years, a variety of attacks have been proposed that actively target the robot itself from the cyber domain.
In this work, we investigate whether an insider adversary can accurately fingerprint robot movements and operational warehousing via the radio frequency side channel.
arXiv Detail & Related papers (2022-09-21T08:14:51Z) - Learning needle insertion from sample task executions [0.0]
Robotic surgery data can be easily logged, and the collected data can be used to learn task models.
We present a needle insertion dataset including 60 successful trials recorded by 3 pairs of stereo cameras.
We also present Deep-robot Learning from Demonstrations, which predicts the desired state of the robot at the next time step (t+1).
arXiv Detail & Related papers (2021-03-14T14:23:17Z) - Mapping Surgeon's Hand/Finger Motion During Conventional Microsurgery to
Enhance Intuitive Surgical Robot Teleoperation [0.5635300481123077]
Current human-robot interfaces lack intuitive teleoperation and cannot mimic a surgeon's hand/finger sensing and fine motion.
We report a pilot study showing an intuitive way of recording and mapping a surgeon's gross hand motion and the fine synergic motion during cardiac micro-surgery.
arXiv Detail & Related papers (2021-02-21T11:21:30Z) - AURSAD: Universal Robot Screwdriving Anomaly Detection Dataset [80.6725125503521]
This report describes a dataset created using a UR3e series robot and OnRobot Screwdriver.
The resulting data contains 2042 samples of normal and anomalous robot operation.
Brief ML benchmarks using this data are also provided, showcasing the data's suitability and potential for further analysis and experimentation.
arXiv Detail & Related papers (2021-02-02T09:59:23Z) - Projection Mapping Implementation: Enabling Direct Externalization of
Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z) - An Intelligent Non-Invasive Real Time Human Activity Recognition System
for Next-Generation Healthcare [9.793913891417912]
Human motion can be used to provide remote healthcare solutions for vulnerable people.
At present, wearable devices can provide real-time monitoring by deploying equipment on a person's body.
This paper demonstrates how human motions can be detected in a quasi-real-time scenario using a non-invasive method.
arXiv Detail & Related papers (2020-08-06T10:51:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.