EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle
Avoidance
- URL: http://arxiv.org/abs/2109.00405v1
- Date: Wed, 1 Sep 2021 14:34:20 GMT
- Title: EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle
Avoidance
- Authors: Celyn Walters and Simon Hadfield
- Abstract summary: We show that the fusion of events and depth overcomes the failure cases of each individual modality when performing obstacle avoidance.
Our proposed approach unifies event camera and lidar streams to estimate metric time-to-impact without prior knowledge of the scene geometry or obstacles.
- Score: 28.88113725832339
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The broad scope of obstacle avoidance has led to many kinds of computer
vision-based approaches. Despite its popularity, it is not a solved problem.
Traditional computer vision techniques using cameras and depth sensors often
focus on static scenes, or rely on priors about the obstacles. Recent
developments in bio-inspired sensors present event cameras as a compelling
choice for dynamic scenes. Although these sensors have many advantages over
their frame-based counterparts, such as high dynamic range and temporal
resolution, event-based perception has largely remained in 2D. This often leads
to solutions reliant on heuristics and specific to a particular task. We show
that the fusion of events and depth overcomes the failure cases of each
individual modality when performing obstacle avoidance. Our proposed approach
unifies event camera and lidar streams to estimate metric time-to-impact
without prior knowledge of the scene geometry or obstacles. In addition, we
release an extensive event-based dataset with six visual streams spanning over
700 scanned scenes.
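The central quantity the paper predicts is a dense, per-pixel time-to-impact in metric units, learned end-to-end from fused event and lidar streams. As a purely illustrative aid (not the authors' method), the sketch below computes a geometric time-to-impact from two consecutive depth maps under a constant closing-rate assumption; the function name, time step, and values are hypothetical.

```python
import numpy as np

def dense_time_to_impact(depth_prev, depth_curr, dt, eps=1e-6):
    """Per-pixel time-to-impact (seconds) from two depth maps.

    Assumes each pixel's depth changes at a constant rate over the
    interval dt; pixels that are not approaching get an infinite TTI.
    """
    closing_rate = (depth_prev - depth_curr) / dt   # positive when approaching
    tti = np.full_like(depth_curr, np.inf)
    approaching = closing_rate > eps
    tti[approaching] = depth_curr[approaching] / closing_rate[approaching]
    return tti

# Illustrative usage with synthetic depth maps (metres):
d0 = np.full((4, 4), 5.0)
d1 = np.full((4, 4), 4.8)                       # scene closes by 0.2 m in 0.05 s
print(dense_time_to_impact(d0, d1, dt=0.05))    # ~1.2 s everywhere
```

This formula only illustrates the target quantity; the paper instead learns it directly from the fused event and lidar streams, without assuming known scene geometry or constant closing rates.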
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Self-supervised Event-based Monocular Depth Estimation using Cross-modal Consistency [18.288912105820167]
We propose a self-supervised event-based monocular depth estimation framework named EMoDepth.
EMoDepth constrains the training process using the cross-modal consistency from intensity frames that are aligned with events in the pixel coordinate.
In inference, only events are used for monocular depth prediction.
arXiv Detail & Related papers (2024-01-14T07:16:52Z)
- Deep Event Visual Odometry [40.57142632274148]
Event cameras offer the exciting possibility of tracking the camera's pose during high-speed motion.
Existing event-based monocular visual odometry approaches demonstrate limited performance on recent benchmarks.
We present Deep Event VO (DEVO), the first monocular event-only system with strong performance on a large number of real-world benchmarks.
arXiv Detail & Related papers (2023-12-15T14:00:00Z)
- Pedestrian detection with high-resolution event camera [0.0]
Event cameras (DVS) are a potentially interesting technology for addressing the above-mentioned problems.
In this paper, we compare two methods of processing event data by means of deep learning for the task of pedestrian detection.
We used a representation in the form of video frames with convolutional neural networks, as well as asynchronous sparse convolutional neural networks.
arXiv Detail & Related papers (2023-05-29T10:57:59Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
A review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks [55.81577205593956]
Event cameras are bio-inspired sensors that capture the per-pixel intensity changes asynchronously.
Deep learning (DL) has been brought to this emerging field and inspired active research endeavors in mining its potential.
arXiv Detail & Related papers (2023-02-17T14:19:28Z)
- Event-based Visual Tracking in Dynamic Environments [0.0]
We propose a framework to take advantage of both event cameras and off-the-shelf deep learning for object tracking.
We show that reconstructing event data into intensity frames improves the tracking performance in conditions under which conventional cameras fail to provide acceptable results.
arXiv Detail & Related papers (2022-12-15T12:18:13Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach on simulated autonomous driving sequences and in real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
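Several of the learning-based works listed above feed event data to neural networks not as raw asynchronous events but as a dense tensor accumulated over a short time window. The sketch below shows one common, generic encoding of this kind, a time-binned voxel grid; it assumes events arrive as (x, y, t, polarity) tuples and is not the specific input representation used by any of the listed papers.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events (x, y, t, polarity) into a (num_bins, H, W) tensor.

    Events are split into equal time bins over their duration, and
    polarities (+1/-1) are summed per pixel per bin.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 2]
    t0, t1 = t.min(), t.max()
    # Normalise timestamps into [0, num_bins); the last event is clipped into the final bin.
    bins = ((t - t0) / max(t1 - t0, 1e-9) * num_bins).astype(int)
    bins = np.clip(bins, 0, num_bins - 1)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    np.add.at(grid, (bins, ys, xs), events[:, 3])
    return grid

# Illustrative usage with a handful of synthetic events (x, y, t in seconds, polarity):
events = np.array([[10, 5, 0.000, +1],
                   [10, 5, 0.004, +1],
                   [11, 5, 0.009, -1]])
vox = events_to_voxel_grid(events, num_bins=3, height=32, width=32)
print(vox.shape, vox.sum())   # (3, 32, 32) 1.0
```

The resulting tensor can be processed by a standard convolutional network; asynchronous sparse networks, as compared in the pedestrian-detection paper above, instead operate on the events directly.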
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.