Autonomous Aerial Robot for High-Speed Search and Intercept Applications
- URL: http://arxiv.org/abs/2112.05465v1
- Date: Fri, 10 Dec 2021 11:49:51 GMT
- Title: Autonomous Aerial Robot for High-Speed Search and Intercept Applications
- Authors: Alejandro Rodriguez-Ramos, Adrian Alvarez-Fernandez, Hriday Bavle,
Javier Rodriguez-Vazquez, Liang Lu, Miguel Fernandez-Cortizas, Ramon A. Suarez
Fernandez, Alberto Rodelgo, Carlos Santos, Martin Molina, Luis Merino,
Fernando Caballero and Pascual Campoy
- Abstract summary: A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
- Score: 86.72321289033562
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, high-speed navigation and environment interaction in the
context of aerial robotics have become topics of interest for several academic
and industrial research efforts. In particular, Search and Intercept (SaI)
applications for aerial robots pose a compelling research area due to their
potential usability in several environments. Nevertheless, SaI tasks demand
challenging developments regarding sensor weight, on-board computation
resources, actuation design, and algorithms for perception and control, among
others. In this work, a fully-autonomous aerial robot for high-speed object
grasping has been proposed. As an additional sub-task, our system is able to
autonomously pierce balloons located on poles close to the surface. Our first
contribution is the design of the aerial robot at an actuation and sensory
level consisting of a novel gripper design with additional sensors enabling the
robot to grasp objects at high speeds. The second contribution is a complete
software framework consisting of perception, state estimation, motion planning,
motion control and mission control in order to rapidly and robustly perform the
autonomous grasping mission. Our approach has been validated in a challenging
international competition and has shown outstanding results, being able to
autonomously search, follow and grasp a moving object at 6 m/s in an outdoor
environment.
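The mission logic the abstract describes (search for a target, follow it, then commit to a high-speed grasp) can be illustrated with a minimal state-machine sketch. The mode names and transition conditions below are hypothetical, since the paper does not publish its code:

```python
from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()   # scan the environment for the target object
    FOLLOW = auto()   # track the moving target until it is within reach
    GRASP  = auto()   # commit to the high-speed grasping maneuver

def next_mode(mode: Mode, target_visible: bool, in_grasp_range: bool) -> Mode:
    """One step of an illustrative mission-control transition function."""
    if mode is Mode.SEARCH:
        return Mode.FOLLOW if target_visible else Mode.SEARCH
    if mode is Mode.FOLLOW:
        if not target_visible:
            return Mode.SEARCH          # lost the target: fall back to searching
        return Mode.GRASP if in_grasp_range else Mode.FOLLOW
    return Mode.GRASP                   # grasping is terminal in this sketch
```

In the real system, `target_visible` and `in_grasp_range` would come from the perception and state-estimation modules, and each mode would drive a different motion-planning behavior; this sketch only shows the high-level switching structure such a mission controller implies.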
Related papers
- Language-guided Robust Navigation for Mobile Robots in Dynamically-changing Environments [26.209402619114353]
We develop an embodied AI system for human-in-the-loop navigation with a wheeled mobile robot.
We propose a method of monitoring the robot's current plan to detect changes in the environment that impact the intended trajectory of the robot.
This work can support applications like precision agriculture and construction, where persistent monitoring of the environment provides a human with information about the environment state.
arXiv Detail & Related papers (2024-09-28T21:30:23Z) - Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z) - SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft
Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new algorithm SpaceYOLO fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on human-inspired decision processes.
Performance in autonomous spacecraft detection of SpaceYOLO is compared to ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Machine Learning-Based Automated Design Space Exploration for Autonomous
Aerial Robots [55.056709056795206]
Building domain-specific architectures for autonomous aerial robots is challenging due to a lack of systematic methodology for designing onboard compute.
We introduce a novel performance model called the F-1 roofline to help architects understand how to build a balanced computing system.
To navigate the cyber-physical design space automatically, we subsequently introduce AutoPilot.
arXiv Detail & Related papers (2021-02-05T03:50:54Z) - High-Speed Robot Navigation using Predicted Occupancy Maps [0.0]
We study algorithmic approaches that allow the robot to predict spaces extending beyond the sensor horizon for robust planning at high speeds.
We accomplish this using a generative neural network trained from real-world data without requiring human annotated labels.
We extend our existing control algorithms to support leveraging the predicted spaces to improve collision-free planning and navigation at high speeds.
arXiv Detail & Related papers (2020-12-22T16:25:12Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z) - Towards Multi-Robot Task-Motion Planning for Navigation in Belief Space [1.4824891788575418]
We present an integrated multi-robot task-motion planning framework for navigation in knowledge-intensive domains.
In particular, we consider a distributed multi-robot setting incorporating mutual observations between the robots.
The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning.
arXiv Detail & Related papers (2020-10-01T06:45:17Z) - SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.