FAITH: Fast iterative half-plane focus of expansion estimation using
event-based optic flow
- URL: http://arxiv.org/abs/2102.12823v1
- Date: Thu, 25 Feb 2021 12:49:02 GMT
- Title: FAITH: Fast iterative half-plane focus of expansion estimation using
event-based optic flow
- Authors: Raoul Dinaux, Nikhil Wessendorp, Julien Dupeyroux, Guido de Croon
- Abstract summary: This study proposes the FAst ITerative Half-plane (FAITH) method to determine the course of a micro air vehicle (MAV).
Results show that the computational efficiency of our solution outperforms state-of-the-art methods while keeping a high level of accuracy.
- Score: 3.326320568999945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Course estimation is a key component for the development of autonomous
navigation systems for robots. While state-of-the-art methods widely use
vision-based algorithms, these all struggle to cope with the complexity of the
real world, being computationally demanding and sometimes too slow. They often
require obstacles to be highly textured to perform well, particularly when the
obstacle lies within the focus of expansion (FOE), where the optic flow (OF) is
almost zero. This study proposes
the FAst ITerative Half-plane (FAITH) method to determine the course of a micro
air vehicle (MAV). This is achieved by means of an event-based camera, along
with a fast RANSAC-based algorithm that uses event-based OF to determine the
FOE. The performance is validated by means of a benchmark on a simulated
environment and then tested on a dataset collected for indoor obstacle
avoidance. Our results show that the computational efficiency of our solution
outperforms state-of-the-art methods while keeping a high level of accuracy.
This has been further demonstrated onboard an MAV equipped with an event-based
camera, showing that our event-based FOE estimation can be achieved online
onboard tiny drones, thus opening the path towards fully neuromorphic solutions
for autonomous obstacle avoidance and navigation onboard MAVs.
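The abstract describes estimating the FOE from event-based OF with a fast RANSAC-based algorithm. The paper's actual half-plane iterative scheme is not reproduced here; the sketch below is only a minimal, generic RANSAC formulation of the underlying idea: under pure translation every flow vector points radially away from the FOE, so candidate FOEs can be found by intersecting the lines through two sampled flow vectors and scoring candidates by how many flow vectors point away from them. All function names, tolerances, and iteration counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_foe_ransac(points, flows, n_iters=200, angle_tol_deg=10.0, rng=None):
    """Illustrative RANSAC sketch of focus-of-expansion estimation.

    points: (N, 2) pixel locations of sparse optic-flow vectors.
    flows:  (N, 2) flow vectors at those locations.
    Under pure translation, each flow vector lies on a line through the FOE,
    so two non-parallel flow lines intersect at a candidate FOE.
    """
    rng = np.random.default_rng(rng)
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    best_foe, best_inliers = None, -1
    n = len(points)
    for _ in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)
        # Intersect the two flow lines: points[i] + t*flows[i] = points[j] + s*flows[j]
        A = np.column_stack([flows[i], -flows[j]])
        b = points[j] - points[i]
        if abs(np.linalg.det(A)) < 1e-9:
            continue  # nearly parallel flow vectors give no stable intersection
        t, _ = np.linalg.solve(A, b)
        foe = points[i] + t * flows[i]
        # Count inliers: flow vectors whose direction aligns with the ray
        # from the candidate FOE to their location (expansion pattern).
        rays = points - foe
        dots = np.einsum('ij,ij->i', rays, flows)
        norms = np.linalg.norm(rays, axis=1) * np.linalg.norm(flows, axis=1) + 1e-12
        inliers = int(np.sum(dots / norms > cos_tol))
        if inliers > best_inliers:
            best_foe, best_inliers = foe, inliers
    return best_foe, best_inliers
```

With noiseless purely radial flow, two samples already recover the FOE exactly; the inlier count matters once the flow is noisy or contaminated by independently moving objects, which is the situation RANSAC is meant to handle.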
Related papers
- Vision-Based Deep Reinforcement Learning of UAV Autonomous Navigation Using Privileged Information [6.371251946803415]
DPRL is an end-to-end policy designed to address the challenge of high-speed autonomous UAV navigation under partially observable environmental conditions.
We leverage an asymmetric Actor-Critic architecture to provide the agent with privileged information during training.
We conduct extensive simulations across various scenarios, benchmarking our DPRL algorithm against the state-of-the-art navigation algorithms.
arXiv Detail & Related papers (2024-12-09T09:05:52Z)
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- Monocular Obstacle Avoidance Based on Inverse PPO for Fixed-wing UAVs [29.207513994002202]
Fixed-wing Unmanned Aerial Vehicles (UAVs) are one of the most commonly used platforms for the Low-altitude Economy (LAE) and Urban Air Mobility (UAM).
Classical obstacle avoidance systems, which rely on prior maps or sophisticated sensors, face limitations in unknown low-altitude environments and small UAV platforms.
This paper proposes a lightweight deep reinforcement learning (DRL) based UAV collision avoidance system.
arXiv Detail & Related papers (2024-11-27T03:03:37Z)
- MPVO: Motion-Prior based Visual Odometry for PointGoal Navigation [3.9974562667271507]
Visual odometry (VO) is essential for enabling accurate point-goal navigation of embodied agents in indoor environments.
Recent deep-learned VO methods show robust performance but suffer from sample inefficiency during training.
We propose a robust and sample-efficient VO pipeline based on motion priors available while an agent is navigating an environment.
arXiv Detail & Related papers (2024-11-07T15:36:49Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5), is tested.
The paper discusses the path to implementing the feature recognition algorithms and towards integrating them into the spacecraft Guidance Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z)
- Globally Optimal Event-Based Divergence Estimation for Ventral Landing [55.29096494880328]
Event sensing is a major component in bio-inspired flight guidance and control systems.
We explore the usage of event cameras for predicting time-to-contact with the surface during ventral landing.
This is achieved by estimating divergence (inverse TTC), which is the rate of radial optic flow, from the event stream generated during landing.
Our core contributions are a novel contrast maximisation formulation for event-based divergence estimation, and a branch-and-bound algorithm to exactly maximise contrast and find the optimal divergence value.
arXiv Detail & Related papers (2022-09-27T06:00:52Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple object velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate accuracy across them.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.