Autonomous Navigation of Micro Air Vehicles in Warehouses Using
Vision-based Line Following
- URL: http://arxiv.org/abs/2310.00950v1
- Date: Mon, 2 Oct 2023 07:43:51 GMT
- Authors: Ling Shuang Soh, and Hann Woei Ho
- Abstract summary: We propose a vision-based solution for indoor Micro Air Vehicle (MAV) navigation, with a primary focus on its application within autonomous warehouses.
Our work centers on the utilization of a single camera as the primary sensor for tasks such as detection, localization, and path planning.
- Score: 1.0128808054306186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a vision-based solution for indoor Micro Air
Vehicle (MAV) navigation, with a primary focus on its application within
autonomous warehouses. Our work centers on the utilization of a single camera
as the primary sensor for tasks such as detection, localization, and path
planning. To achieve these objectives, we implement the HSV color detection and
the Hough Line Transform for effective line detection within warehouse
environments. The integration of a Kalman filter into our system enables the
camera to track yellow lines reliably. We evaluated the performance of our
vision-based line following algorithm through various MAV flight tests
conducted in the Gazebo 11 platform, utilizing ROS Noetic. The results of these
simulations demonstrate the system's capability to successfully navigate narrow
indoor spaces. Our proposed system has the potential to significantly reduce
labor costs and enhance overall productivity in warehouse operations. This work
contributes to the growing field of MAV applications in autonomous warehouses,
addressing the need for efficient logistics and supply chain solutions.
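The paper itself does not include source code; as a rough illustration of the tracking stage described above, the minimal sketch below implements a constant-velocity Kalman filter over the lateral offset of the detected line. The state layout, the noise parameters, and the premise that per-frame offsets come from an HSV mask plus a Hough line fit are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class LineOffsetKalman:
    """1-D constant-velocity Kalman filter tracking the lateral offset
    of a detected line. State: [offset, offset_rate]."""

    def __init__(self, dt=0.05, process_var=1e-3, meas_var=4.0):
        self.x = np.zeros(2)                        # state estimate
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.Q = process_var * np.eye(2)            # process noise
        self.H = np.array([[1.0, 0.0]])             # only the offset is measured
        self.R = np.array([[meas_var]])             # measurement noise (pixels^2)

    def predict(self):
        """Propagate the state one time step; usable when detection fails."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, measured_offset):
        """Fuse one per-frame line offset (e.g. from HSV mask + Hough fit)."""
        y = measured_offset - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

In a ROS node, `predict()` would run every control tick and `update()` whenever the detection stage returns a line, so the filter can bridge frames where the yellow line is momentarily lost.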
Related papers
- WLTCL: Wide Field-of-View 3-D LiDAR Truck Compartment Automatic Localization System [9.07574138083974]
We propose an innovative wide field-of-view 3-D LiDAR vehicle compartment automatic localization system.
For vehicles of various sizes, this system leverages the LiDAR to generate high-density point clouds within an extensive field-of-view range.
Our compartment key point positioning algorithm utilizes the geometric features of the compartments to accurately locate the corner points.
arXiv Detail & Related papers (2025-04-26T09:35:47Z)
- Benchmarking Online Object Trackers for Underwater Robot Position Locking Applications [0.9642500063568189]
Vision-based underwater robot navigation and control has recently gained increasing attention to counter the numerous challenges faced in underwater conditions.
In this paper, we present the first rigorous, unified benchmark of more than seven Machine Learning (ML)-based one-shot object tracking algorithms for vision-based position locking of ROV platforms.
Our proposed system uses the output result of different object tracking algorithms to automatically correct the position of the ROV against external disturbances.
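As a hedged sketch of how a tracker's output can drive position locking of this kind, the snippet below maps the tracked target's pixel offset from the image center to normalized velocity commands. The gain, the clamping range, and the sign conventions are assumptions for illustration, not details from the paper.

```python
def position_correction(bbox_center, frame_center, gain=0.002):
    """Map the tracked target's pixel offset from the image center to
    lateral/vertical velocity commands that re-center the vehicle.
    Output components are clamped to [-1, 1] (normalized thruster command)."""
    ex = bbox_center[0] - frame_center[0]   # +ve: target right of center
    ey = bbox_center[1] - frame_center[1]   # +ve: target below center
    vy = max(-1.0, min(1.0, gain * ex))     # strafe toward the target
    vz = max(-1.0, min(1.0, -gain * ey))    # ascend if target is above center
    return vy, vz
```

Running this proportional correction each frame against the tracker's bounding-box center rejects slow external disturbances; a real controller would add integral/derivative terms and a dead-band.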
arXiv Detail & Related papers (2025-02-23T13:27:34Z)
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR [12.183773707869069]
We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation.
Our method enables mobile robots to "see around corners" by utilizing multi-bounce light information.
arXiv Detail & Related papers (2024-10-04T16:03:13Z)
- ORBSLAM3-Enhanced Autonomous Toy Drones: Pioneering Indoor Exploration [30.334482597992455]
Navigating toy drones through uncharted GPS-denied indoor spaces poses significant difficulties.
We introduce a real-time autonomous indoor exploration system tailored for drones equipped with a monocular RGB camera.
Our system utilizes ORB-SLAM3, a state-of-the-art vision feature-based SLAM, to handle both the localization of toy drones and the mapping of unmapped indoor terrains.
arXiv Detail & Related papers (2023-12-20T19:20:26Z)
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
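The intensity-and-spatial filtering idea can be illustrated with a toy numpy sketch: drop returns that are both low-intensity and spatially isolated, the typical signature of airborne particulates. The thresholds and the low-intensity assumption are illustrative choices, not the paper's actual pipeline, which is modular and more elaborate.

```python
import numpy as np

def filter_smoke_points(points, intensities, min_intensity=0.15,
                        neighbor_radius=0.3, min_neighbors=3):
    """Drop LiDAR returns that look like smoke or dust: points with
    low reflectance intensity that are also spatially isolated.
    points: (N, 3) array of XYZ; intensities: (N,) array in [0, 1]."""
    low = intensities < min_intensity
    # Count neighbors within the radius (O(N^2); fine for a sketch,
    # a KD-tree would be used on real scans).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = (d < neighbor_radius).sum(axis=1) - 1   # exclude self
    sparse = neighbors < min_neighbors
    keep = ~(low & sparse)                              # solid surfaces survive
    return points[keep], intensities[keep]
```

Low-intensity points on a dense surface are kept, so dark but solid obstacles are not discarded along with the smoke.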
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- ADAPT: An Open-Source sUAS Payload for Real-Time Disaster Prediction and Response with AI [55.41644538483948]
Small unmanned aircraft systems (sUAS) are becoming prominent components of many humanitarian assistance and disaster response operations.
We have developed the free and open-source ADAPT multi-mission payload for deploying real-time AI and computer vision onboard a sUAS.
We demonstrate the example mission of real-time, in-flight ice segmentation to monitor river ice state and provide timely predictions of catastrophic flooding events.
arXiv Detail & Related papers (2022-01-25T14:51:19Z)
- Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without any single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system has improvements in terms of time-cost, the proportion of search area surveyed, as well as successful rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- Simultaneous Navigation and Radio Mapping for Cellular-Connected UAV with Deep Reinforcement Learning [46.55077580093577]
Achieving ubiquitous 3D communication coverage for UAVs in the sky is a new challenge.
We propose a new coverage-aware navigation approach, which exploits the UAV's controllable mobility to design its navigation/trajectory.
We propose a new framework called simultaneous navigation and radio mapping (SNARM), where the UAV's signal measurement is used to train the deep Q network.
arXiv Detail & Related papers (2020-03-17T08:16:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.