FAITH: Fast iterative half-plane focus of expansion estimation using event-based optic flow
- URL: http://arxiv.org/abs/2102.12823v1
- Date: Thu, 25 Feb 2021 12:49:02 GMT
- Title: FAITH: Fast iterative half-plane focus of expansion estimation using event-based optic flow
- Authors: Raoul Dinaux, Nikhil Wessendorp, Julien Dupeyroux, Guido de Croon
- Abstract summary: This study proposes the FAst ITerative Half-plane (FAITH) method to determine the course of a micro air vehicle (MAV).
Results show that the computational efficiency of our solution outperforms state-of-the-art methods while keeping a high level of accuracy.
- Score: 3.326320568999945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Course estimation is a key component for the development of autonomous
navigation systems for robots. While state-of-the-art methods widely rely on
vision-based algorithms, they struggle with the complexity of the real world:
they are computationally expensive and often too slow. They typically require
obstacles to be highly textured to perform well, particularly when an obstacle
lies near the focus of expansion (FOE), where the optic flow (OF) is almost
null. This study proposes
the FAst ITerative Half-plane (FAITH) method to determine the course of a micro
air vehicle (MAV). This is achieved by means of an event-based camera, along
with a fast RANSAC-based algorithm that uses event-based OF to determine the
FOE. The performance is validated by means of a benchmark on a simulated
environment and then tested on a dataset collected for indoor obstacle
avoidance. Our results show that the computational efficiency of our solution
outperforms state-of-the-art methods while keeping a high level of accuracy.
This has been further demonstrated onboard an MAV equipped with an event-based
camera, showing that our event-based FOE estimation can be achieved online
onboard tiny drones, thus opening the path towards fully neuromorphic solutions
for autonomous obstacle avoidance and navigation onboard MAVs.
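To make the approach concrete, here is a minimal sketch of RANSAC-based FOE estimation from sparse optic-flow vectors, in the spirit of the method described above. This is not the authors' FAITH implementation: the half-plane inlier test, the two-vector line intersection, and all names are illustrative assumptions. Under pure forward translation, flow radiates away from the FOE, so a flow vector v observed at image point p is consistent with a candidate FOE c whenever dot(v, p - c) >= 0.

```python
import numpy as np

def foe_inliers(foe, points, flows):
    """Count flow vectors consistent with expansion away from `foe`:
    under pure forward translation, each flow vector must point away
    from the FOE, i.e. dot(flow, point - foe) >= 0 (half-plane test)."""
    radial = points - foe                       # directions FOE -> point
    return int(np.sum(np.einsum("ij,ij->i", flows, radial) >= 0.0))

def ransac_foe(points, flows, n_iters=200, seed=None):
    """RANSAC-style FOE search: each iteration intersects the lines
    carried by two sampled flow vectors and keeps the candidate with
    the most half-plane inliers."""
    rng = np.random.default_rng(seed)
    best_foe, best_score = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        # Solve p_i + t*v_i = p_j + s*v_j for the line intersection.
        A = np.column_stack([flows[i], -flows[j]])
        if abs(np.linalg.det(A)) < 1e-9:        # near-parallel flow lines
            continue
        t, _ = np.linalg.solve(A, points[j] - points[i])
        candidate = points[i] + t * flows[i]
        score = foe_inliers(candidate, points, flows)
        if score > best_score:
            best_foe, best_score = candidate, score
    return best_foe, best_score
```

On synthetic expanding flow (flows = k * (points - true_foe) with k > 0), ransac_foe recovers the true FOE; the actual FAITH method additionally relies on fast iterative half-plane refinement and runs on event-based optic flow onboard the MAV.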
Related papers
- MPVO: Motion-Prior based Visual Odometry for PointGoal Navigation [3.9974562667271507]
Visual odometry (VO) is essential for enabling accurate point-goal navigation of embodied agents in indoor environments.
Recent deep-learned VO methods show robust performance but suffer from sample inefficiency during training.
We propose a robust and sample-efficient VO pipeline based on motion priors available while an agent is navigating an environment.
arXiv Detail & Related papers (2024-11-07T15:36:49Z)
- Ensuring UAV Safety: A Vision-only and Real-time Framework for Collision Avoidance Through Object Detection, Tracking, and Distance Estimation [16.671696289301625]
This paper presents a deep-learning framework that utilizes optical sensors for the detection, tracking, and distance estimation of non-cooperative aerial vehicles.
In this work, we propose a method for estimating the distance information of a detected aerial object in real time using only the input of a monocular camera.
arXiv Detail & Related papers (2024-05-10T18:06:41Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once (YOLOv5), is tested.
The paper discusses the path towards implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z)
- Globally Optimal Event-Based Divergence Estimation for Ventral Landing [55.29096494880328]
Event sensing is a major component in bio-inspired flight guidance and control systems.
We explore the usage of event cameras for predicting time-to-contact with the surface during ventral landing.
This is achieved by estimating divergence (inverse TTC), which is the rate of radial optic flow, from the event stream generated during landing.
Our core contributions are a novel contrast maximisation formulation for event-based divergence estimation, and a branch-and-bound algorithm to exactly maximise contrast and find the optimal divergence value (a minimal sketch of the underlying divergence relation appears after this list).
arXiv Detail & Related papers (2022-09-27T06:00:52Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity to predict the future, significantly improving the results for streaming perception.
We consider multiple velocities of the driving scene and propose Velocity-awared streaming AP (VsAP) to jointly evaluate the accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual cues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
- Reinforcement Learning for UAV Autonomous Navigation, Mapping and Target Detection [36.79380276028116]
We study a joint detection, mapping and navigation problem for a single unmanned aerial vehicle (UAV) equipped with a low complexity radar and flying in an unknown environment.
The goal is to optimize the UAV's trajectory to maximize mapping accuracy while avoiding areas where measurements might not be sufficiently informative for target detection.
arXiv Detail & Related papers (2020-05-05T20:39:18Z)
- Congestion-aware Evacuation Routing using Augmented Reality Devices [96.68280427555808]
We present a congestion-aware routing solution for indoor evacuation, which produces real-time individual-customized evacuation routes among multiple destinations.
A population density map, obtained on-the-fly by aggregating locations of evacuees from user-end Augmented Reality (AR) devices, is used to model the congestion distribution inside a building.
arXiv Detail & Related papers (2020-04-25T22:54:35Z)
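As noted in the ventral-landing entry above, the divergence/TTC relation that paper builds on can be sketched in a few lines. The sketch below uses a simple least-squares fit, not that paper's contrast-maximisation and branch-and-bound machinery; all names and the synthetic check are illustrative assumptions. For pure ventral motion the radial flow field is v = D * (p - c), so the divergence D (inverse time-to-contact) is the scale that best maps radial offsets onto the observed flow.

```python
import numpy as np

def estimate_divergence(points, flows, center):
    """Least-squares divergence of a radial optic-flow field.
    For pure ventral motion v = D * (p - c), so
        D = sum(v . r) / sum(|r|^2),  with r = p - c,
    and time-to-contact is 1 / D (for D > 0)."""
    r = points - center
    return np.einsum("ij,ij->", flows, r) / np.einsum("ij,ij->", r, r)

# Illustrative check on synthetic expanding flow:
pts = np.random.default_rng(0).random((500, 2))
c = np.array([0.5, 0.5])
flows = 0.2 * (pts - c)                   # ground-truth divergence D = 0.2
D_hat = estimate_divergence(pts, flows, c)
print(D_hat, 1.0 / D_hat)                 # ~0.2, so TTC ~ 5 time units
```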
This list is automatically generated from the titles and abstracts of the papers on this site.