GraspLook: a VR-based Telemanipulation System with R-CNN-driven
Augmentation of Virtual Environment
- URL: http://arxiv.org/abs/2110.12518v1
- Date: Sun, 24 Oct 2021 19:50:39 GMT
- Title: GraspLook: a VR-based Telemanipulation System with R-CNN-driven
Augmentation of Virtual Environment
- Authors: Polina Ponomareva, Daria Trinitatova, Aleksey Fedoseev, Ivan Kalinov,
Dzmitry Tsetserukou
- Abstract summary: The paper proposes a novel system of teleoperation based on an augmented virtual environment.
The developed system allows users to operate the robot more smoothly, which leads to a decrease in task execution time.
- Score: 3.7003629688390896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The teleoperation of robotic systems in medical applications requires stable
and convenient visual feedback for the operator. The most accessible approach
to delivering visual information from the remote area is using cameras to
transmit a video stream from the environment. However, such systems are
sensitive to camera resolution, limited viewpoints, and cluttered
environments, which place additional mental demands on the human operator. The paper
proposes a novel system of teleoperation based on an augmented virtual
environment (VE). A region-based convolutional neural network (R-CNN) is
applied to detect the laboratory instrument and estimate its position in the
remote environment so that its digital twin can be displayed in the VE, which is
necessary for dexterous telemanipulation. The experimental results revealed
that the developed system allows users to operate the robot more smoothly, which
leads to a decrease in task execution time when manipulating test tubes. In
addition, the participants evaluated the developed system as less mentally
demanding (by 11%) and requiring less effort (by 16%) to accomplish the task
than the camera-based teleoperation approach and highly assessed their
performance in the augmented VE. The proposed technology can be potentially
applied for conducting laboratory tests in remote areas when operating with
infectious and poisonous reagents.
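The paper does not publish its implementation, but the pipeline it describes (a 2D R-CNN detection used to estimate where an object sits in the remote scene, so a digital twin can be placed in the VE) can be sketched with standard pinhole back-projection. The helper names, box coordinates, camera intrinsics, and depth value below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: turning an R-CNN bounding box into a 3D position
# for a digital twin. Assumes known camera intrinsics and object depth;
# the paper's actual estimation method is not reproduced here.

def box_center(box):
    """Center (u, v) of an axis-aligned bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at a known depth (meters)
    into camera coordinates (X, Y, Z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a detection box around a test tube, with assumed intrinsics.
box = (300.0, 180.0, 340.0, 260.0)   # (x1, y1, x2, y2) in pixels
u, v = box_center(box)
position = backproject(u, v, depth=0.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

In a full system the resulting camera-frame position would still need to be transformed into the VE's world frame before the digital twin is drawn.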
Related papers
- AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System [51.48191418148764]
Vision-based teleoperation can endow robots with human-level intelligence to interact with the environment.
Current vision-based teleoperation systems are designed and engineered for a particular robot model and deployment environment.
We propose AnyTeleop, a unified and general teleoperation system to support multiple different arms, hands, realities, and camera configurations within a single system.
arXiv Detail & Related papers (2023-07-10T14:11:07Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Deep Multi-Emitter Spectrum Occupancy Mapping that is Robust to the
Number of Sensors, Noise and Threshold [32.880113150521154]
One of the primary goals in spectrum occupancy mapping is to create a system that is robust to assumptions about the number of sensors, occupancy threshold (in dBm), sensor noise, number of emitters and the propagation environment.
We show that such a system may be designed with neural networks using a process of aggregation to allow a variable number of sensors during training and testing.
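The aggregation idea described above (pooling per-sensor features so the network accepts any number of sensors) can be illustrated with a minimal sketch. The toy embedding and mean pooling below are assumptions in the spirit of Deep Sets-style architectures, not the paper's actual network:

```python
# Hypothetical sketch of permutation-invariant aggregation over a
# variable number of sensor readings. A real system would use learned
# networks; this toy version only shows the structural idea.

def embed(reading):
    """Toy per-sensor embedding: (power_dbm, x, y) -> feature vector."""
    p, x, y = reading
    return [p, p * x, p * y]

def aggregate(readings):
    """Mean-pool per-sensor features: the output has a fixed size
    regardless of how many sensors report, and of their order."""
    feats = [embed(r) for r in readings]
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]
```

Because the pooled feature has a fixed dimension, the same downstream mapping network can be trained and tested with different sensor counts.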
arXiv Detail & Related papers (2022-11-27T14:08:11Z) - Virtual Reality via Object Poses and Active Learning: Realizing
Telepresence Robots with Aerial Manipulation Capabilities [39.29763956979895]
This article presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments.
The proposed system not only features a haptic device, but also a virtual reality (VR) interface that provides real-time 3D displays of the robot's workspace.
We show over 70 robust executions of pick-and-place, force application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM).
arXiv Detail & Related papers (2022-10-18T08:42:30Z) - Optical flow-based branch segmentation for complex orchard environments [73.11023209243326]
We train a neural network system in simulation only using simulated RGB data and optical flow.
This resulting neural network is able to perform foreground segmentation of branches in a busy orchard environment without additional real-world training or using any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
arXiv Detail & Related papers (2022-02-26T03:38:20Z) - Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located in poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z) - AEGIS: A real-time multimodal augmented reality computer vision based
system to assist facial expression recognition for individuals with autism
spectrum disorder [93.0013343535411]
This paper presents the development of a multimodal augmented reality (AR) system which combines the use of computer vision and deep convolutional neural networks (CNNs).
The proposed system, which we call AEGIS, is an assistive technology deployable on a variety of user devices including tablets, smartphones, video conference systems, or smartglasses.
We leverage both spatial and temporal information in order to provide an accurate expression prediction, which is then converted into its corresponding visualization and drawn on top of the original video frame.
arXiv Detail & Related papers (2020-10-22T17:20:38Z) - Detection and Localization of Robotic Tools in Robot-Assisted Surgery
Videos Using Deep Neural Networks for Region Proposal and Detection [30.042965489804356]
We propose a solution to the tool detection and localization open problem in RAS video understanding.
We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos.
Our results, with an Average Precision (AP) of 91% and a mean detection time of 0.1 seconds per test frame, indicate that our approach is superior to conventionally used methods for medical imaging.
arXiv Detail & Related papers (2020-07-29T10:59:15Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z) - A Markerless Deep Learning-based 6 Degrees of Freedom Pose Estimation
for Mobile Robots using RGB Data [3.4806267677524896]
We propose a method to deploy state of the art neural networks for real time 3D object localization on augmented reality devices.
We focus on fast 2D detection approaches which are extracting the 3D pose of the object fast and accurately by using only 2D input.
For the 6D annotation of 2D images, we developed an annotation tool, which is, to our knowledge, the first such open-source tool to be available.
arXiv Detail & Related papers (2020-01-16T09:13:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.