Tethered Aerial Visual Assistance
- URL: http://arxiv.org/abs/2001.06347v1
- Date: Wed, 15 Jan 2020 06:41:04 GMT
- Title: Tethered Aerial Visual Assistance
- Authors: Xuesu Xiao, Jan Dufek, Robin R. Murphy
- Abstract summary: An autonomous tethered Unmanned Aerial Vehicle (UAV) is developed into a visual assistant in a marsupial co-robots team.
Using a fundamental viewpoint quality theory, a formal risk reasoning framework, and a newly developed tethered motion suite, our visual assistant is able to autonomously navigate to good-quality viewpoints.
The developed marsupial co-robots team could improve tele-operation efficiency in nuclear operations, bomb squads, disaster robotics, and other domains.
- Score: 5.237054164442403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, an autonomous tethered Unmanned Aerial Vehicle (UAV) is
developed into a visual assistant in a marsupial co-robots team, collaborating
with a tele-operated Unmanned Ground Vehicle (UGV) for robot operations in
unstructured or confined environments. These environments pose extreme
challenges to the remote tele-operator due to insufficient situational
awareness, caused mostly by the cluttered, confined surroundings and by the
stationary viewpoint, limited field of view, and lack of depth perception of
the robot's onboard cameras. To overcome these problems, current practice uses
a secondary tele-operated robot that acts as a visual assistant and provides
external viewpoints to overcome the perceptual limitations of the primary
robot's onboard sensors. However, a second tele-operated robot requires extra
manpower and adds coordination demands between the primary and secondary
operators, and the manually chosen viewpoints tend to be subjective and
sub-optimal. Considering these intricacies, we develop an autonomous tethered
aerial visual assistant that replaces the secondary tele-operated robot and its
operator, reducing the human-to-robot ratio from 2:2 to 1:2. Using a
fundamental viewpoint quality theory, a formal
risk reasoning framework, and a newly developed tethered motion suite, our
visual assistant is able to autonomously navigate to good-quality viewpoints in
a risk-aware manner through unstructured or confined spaces with a tether. The
developed marsupial co-robots team could improve tele-operation efficiency in
nuclear operations, bomb squads, disaster robotics, and other domains with
novel tasks or highly occluded environments, by reducing manpower and
coordination demands and by achieving better visual assistance quality through
trustworthy risk-aware motion.
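The risk-aware viewpoint selection described in the abstract can be sketched as a constrained trade-off between viewpoint quality and traversal risk. The `Viewpoint` fields, the scoring function, and the thresholds below are illustrative assumptions for this sketch, not the paper's actual viewpoint quality theory or risk reasoning framework:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Viewpoint:
    position: tuple       # (x, y, z) candidate camera pose (hypothetical representation)
    quality: float        # viewpoint-quality score in [0, 1], higher is better
    path_risk: float      # accumulated traversal risk in [0, 1], higher is riskier

def select_viewpoint(candidates, risk_weight=0.5, max_risk=0.8) -> Optional[Viewpoint]:
    """Pick the candidate with the best quality-vs-risk trade-off.

    Candidates whose traversal risk exceeds max_risk are rejected outright;
    the rest are ranked by quality penalized by weighted risk.
    """
    feasible = [v for v in candidates if v.path_risk <= max_risk]
    if not feasible:
        return None
    return max(feasible, key=lambda v: v.quality - risk_weight * v.path_risk)

# Example: the highest-quality viewpoint is rejected for excessive risk,
# and the low-risk candidate wins the penalized ranking.
candidates = [
    Viewpoint((0, 0, 1), quality=0.9, path_risk=0.9),  # too risky: filtered out
    Viewpoint((1, 0, 1), quality=0.7, path_risk=0.2),  # score 0.7 - 0.1 = 0.60
    Viewpoint((2, 0, 1), quality=0.8, path_risk=0.5),  # score 0.8 - 0.25 = 0.55
]
best = select_viewpoint(candidates)  # → the (1, 0, 1) candidate
```

In the paper's actual system, quality comes from a formal viewpoint quality theory and risk from a formal risk reasoning framework over tethered motion; this sketch only shows the shape of the trade-off.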
Related papers
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a wide variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [17.505318269362512]
Open-TeleVision allows operators to actively perceive the robot's surroundings in a stereoscopic manner.
The system mirrors the operator's arm and hand movements on the robot, creating an immersive experience.
We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks.
arXiv Detail & Related papers (2024-07-01T17:55:35Z)
- Amplifying robotics capacities with a human touch: An immersive low-latency panoramic remote system [16.97496024217201]
"Avatar" system is an immersive low-latency panoramic human-robot interaction platform.
Under favorable network conditions, we achieved a low-latency high-definition panoramic visual experience with a delay of 357ms.
The system enables remote control over vast physical distances, spanning campuses, provinces, countries, and even continents.
arXiv Detail & Related papers (2024-01-07T06:55:41Z)
- AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System [51.48191418148764]
Vision-based teleoperation can endow robots with human-level intelligence to interact with the environment.
Current vision-based teleoperation systems are designed and engineered towards a particular robot model and deploy environment.
We propose AnyTeleop, a unified and general teleoperation system to support multiple different arms, hands, realities, and camera configurations within a single system.
arXiv Detail & Related papers (2023-07-10T14:11:07Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Marvin: Innovative Omni-Directional Robotic Assistant for Domestic Environments [0.0]
We present Marvin, a novel assistive robot we developed with a modular layer-based architecture.
We propose an omnidirectional platform provided with four mecanum wheels, which enable autonomous navigation.
Deep learning solutions for visual perception, person pose classification and vocal command completely run on the embedded hardware of the robot.
arXiv Detail & Related papers (2021-12-10T15:27:53Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- An Embarrassingly Pragmatic Introduction to Vision-based Autonomous Robots [0.0]
We develop a small-scale autonomous vehicle capable of understanding the scene using only visual information.
We discuss the current state of Robotics and autonomous driving and the technological and ethical limitations that we can find in this field.
arXiv Detail & Related papers (2021-11-15T01:31:28Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
arXiv Detail & Related papers (2020-08-24T18:04:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.