A Perspective on Robotic Telepresence and Teleoperation using Cognition:
Are we there yet?
- URL: http://arxiv.org/abs/2203.02959v1
- Date: Sun, 6 Mar 2022 13:10:00 GMT
- Title: A Perspective on Robotic Telepresence and Teleoperation using Cognition:
Are we there yet?
- Authors: Hrishav Bakul Barua, Ashis Sau, Ruddra dev Roychoudhury
- Abstract summary: With the Artificial Intelligence (AI) revolution already underway, a wide range of robotic applications is being realized.
These technologies find significant application in health care, education, surveillance, disaster recovery, and corporate/government sectors.
However, questions remain about their maturity, security, and safety levels.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Telepresence and teleoperation robotics have attracted a great deal of
attention over the last 10 years. With the Artificial Intelligence (AI)
revolution already underway, a wide range of robotic applications is being
realized, and intelligent robotic systems are being deployed in both industrial
and domestic environments. Telepresence is the idea of being present at a remote
location virtually or via a robotic avatar; similarly, operating a robot at a
remote location to perform various tasks is called teleoperation. These
technologies find significant application in health care, education,
surveillance, disaster recovery, and corporate/government sectors. However,
questions remain about their maturity, security, and safety levels. We also need
to think about enhancing user experience and trust in such technologies as we
move into the next generation of computing.
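As a rough illustration of what the abstract's notion of teleoperation entails in practice (streaming operator commands to a remote robot and coping with delayed or lost feedback), the sketch below shows a minimal operator-side control loop. It is not taken from the paper; the endpoint address, the JSON message fields, and the send_velocity helper are hypothetical stand-ins for whatever transport and command format a real system would use.

```python
# Minimal, illustrative teleoperation sketch (not from the paper):
# an operator-side loop that streams velocity commands to a remote
# robot over UDP and prints back a status heartbeat. The address and
# message fields below are assumptions chosen for clarity.
import json
import socket
import time

ROBOT_ADDR = ("192.0.2.10", 9000)  # hypothetical robot endpoint


def send_velocity(sock, linear, angular):
    """Encode a simple velocity command and send it to the robot."""
    msg = json.dumps({"v": linear, "w": angular, "t": time.time()})
    sock.sendto(msg.encode("utf-8"), ROBOT_ADDR)


def teleop_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)  # tolerate a lossy link; skip late feedback
    try:
        while True:
            # In a real system these values would come from a joystick,
            # VR controller, or learned policy on the operator side.
            send_velocity(sock, linear=0.2, angular=0.0)
            try:
                status, _ = sock.recvfrom(1024)  # robot heartbeat/telemetry
                print("robot status:", status.decode("utf-8"))
            except socket.timeout:
                print("no feedback this cycle (high latency or packet loss)")
            time.sleep(0.05)  # ~20 Hz command rate
    finally:
        sock.close()


if __name__ == "__main__":
    teleop_loop()
```

A real telepresence stack would add video/audio streaming, authentication, and safety interlocks on the robot side; the point here is only the command/feedback cycle and latency handling on which the abstract's concerns about maturity, security, and safety ultimately rest.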
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
  Legged robots are physically capable of navigating a diverse variety of environments and overcoming a wide range of obstructions.
  Current learning methods often struggle to generalize to the long tail of unexpected situations without heavy human supervision.
  We propose VLM-Predictive Control (VLM-PC), a system combining two key components that we find crucial for eliciting on-the-fly, adaptive behavior selection.
  arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [17.505318269362512]
  Open-TeleVision allows operators to actively perceive the robot's surroundings in a stereoscopic manner.
  The system mirrors the operator's arm and hand movements on the robot, creating an immersive experience.
  We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks.
  arXiv Detail & Related papers (2024-07-01T17:55:35Z)
- Amplifying robotics capacities with a human touch: An immersive low-latency panoramic remote system [16.97496024217201]
  The "Avatar" system is an immersive, low-latency, panoramic human-robot interaction platform.
  Under favorable network conditions, we achieved a low-latency, high-definition panoramic visual experience with a delay of 357 ms.
  The system enables remote control over vast physical distances, spanning campuses, provinces, countries, and even continents.
  arXiv Detail & Related papers (2024-01-07T06:55:41Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
  Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
  Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
  In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
  arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System [51.48191418148764]
  Vision-based teleoperation can endow robots with human-level intelligence to interact with the environment.
  Current vision-based teleoperation systems are designed and engineered towards a particular robot model and deployment environment.
  We propose AnyTeleop, a unified and general teleoperation system that supports multiple different arms, hands, realities, and camera configurations within a single system.
  arXiv Detail & Related papers (2023-07-10T14:11:07Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
  We study how visual, auditory, and tactile perception can jointly help robots solve complex manipulation tasks.
  We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
  arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- Artificial Intelligence for the Metaverse: A Survey [66.57225253532748]
  We first deliver a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse.
  We then convey a comprehensive investigation of AI-based methods concerning six technical aspects that have potential for the metaverse.
  Several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in virtual worlds.
  arXiv Detail & Related papers (2022-02-15T03:34:56Z)
- Design and Development of Autonomous Delivery Robot [0.16863755729554888]
  We present an autonomous mobile robot platform that delivers packages within the VNIT campus without any human intercommunication.
  The entire pipeline of an autonomous robot working in outdoor environments is explained in this thesis.
  arXiv Detail & Related papers (2021-03-16T17:57:44Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
  Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
  We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
  We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
  arXiv Detail & Related papers (2020-08-24T18:04:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.