Deep Event-based Object Detection in Autonomous Driving: A Survey
- URL: http://arxiv.org/abs/2405.03995v1
- Date: Tue, 7 May 2024 04:17:04 GMT
- Title: Deep Event-based Object Detection in Autonomous Driving: A Survey
- Authors: Bingquan Zhou, Jie Jiang
- Abstract summary: Event cameras have emerged as promising sensors for autonomous driving due to their low latency, high dynamic range, and low power consumption.
This paper provides an overview of object detection using event data in autonomous driving, showcasing the competitive benefits of event cameras.
- Score: 7.197775088663435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection plays a critical role in autonomous driving, where accurately and efficiently detecting objects in fast-moving scenes is crucial. Traditional frame-based cameras face challenges in balancing latency and bandwidth, necessitating innovative solutions. Event cameras have emerged as promising sensors for autonomous driving due to their low latency, high dynamic range, and low power consumption. However, effectively utilizing the asynchronous and sparse event data presents challenges, particularly in maintaining low latency and lightweight architectures for object detection. This paper provides an overview of object detection using event data in autonomous driving, showcasing the competitive benefits of event cameras.
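The asynchronous, sparse event stream described in the abstract usually has to be converted into a dense tensor before a conventional detector can consume it. A minimal sketch of one common representation, a two-channel polarity histogram; the frame size, event format, and toy event stream here are illustrative, not taken from the paper:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate asynchronous events into a dense 2-channel frame.

    `events` is an (N, 4) array of (x, y, t, polarity) rows, with
    polarity in {-1, +1}. Channel 0 counts positive events, channel 1
    counts negative events -- one simple dense representation that lets
    a standard frame-based detector consume sparse event data.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, _t, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, int(y), int(x)] += 1.0
    return frame

# Toy stream: two positive events at (3, 2), one negative at (0, 0).
events = np.array([[3, 2, 0.001, 1], [3, 2, 0.002, 1], [0, 0, 0.003, -1]])
frame = events_to_frame(events, height=4, width=5)
```

Stacks of such frames over short time windows are what many surveyed detectors feed to standard CNN backbones, trading some of the sensor's temporal resolution for compatibility with frame-based architectures.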
Related papers
- Research, Applications and Prospects of Event-Based Pedestrian Detection: A Survey [10.494414329120909]
Event-based cameras, inspired by the biological retina, have evolved into cutting-edge sensors distinguished by their minimal power requirements, negligible latency, superior temporal resolution, and expansive dynamic range.
Event-based cameras address limitations by eschewing extraneous data transmissions and obviating motion blur in high-speed imaging scenarios.
This paper offers an exhaustive review of research and applications particularly in the autonomous driving context.
arXiv Detail & Related papers (2024-07-05T06:17:00Z) - PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for their practical application in real driving scenarios due to the high level of latency.
In this paper, we explore the use of the neural architecture search (NAS) methods to search for efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z) - Real-time Traffic Object Detection for Autonomous Driving [5.780326596446099]
Modern computer vision techniques tend to prioritize accuracy over efficiency.
Existing object detectors are far from being real-time.
We propose a more suitable alternative that incorporates real-time requirements.
arXiv Detail & Related papers (2024-01-31T19:12:56Z) - SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
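Spiking neural networks of the kind SpikeMOT builds on are driven by simple neuron models. A minimal leaky integrate-and-fire (LIF) neuron sketch; the threshold and decay values are illustrative, not taken from the paper:

```python
def lif_neuron(input_current, threshold=1.0, decay=0.9):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential leaks by `decay` each step and integrates
    the incoming current; when it crosses `threshold` the neuron emits
    a spike and resets. Binary spikes are what make SNN activity sparse.
    """
    v = 0.0
    spikes = []
    for current in input_current:
        v = decay * v + current  # leak, then integrate
        if v >= threshold:
            spikes.append(1)     # fire
            v = 0.0              # reset
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires once.
spikes = lif_neuron([0.5, 0.5, 0.5, 0.5])
```

Because a neuron only emits when its potential crosses threshold, downstream layers see activity only where events actually occur, which is the source of the efficiency claims for event-driven trackers.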
arXiv Detail & Related papers (2023-09-29T05:13:43Z) - DOTIE -- Detecting Objects through Temporal Isolation of Events using a Spiking Architecture [5.340730281227837]
Vision-based autonomous navigation systems rely on fast and accurate object detection algorithms to avoid obstacles.
We propose a novel technique that utilizes the temporal information inherently present in the events to efficiently detect moving objects.
We show that by utilizing our architecture, autonomous navigation systems can have minimal latency and energy overheads for performing object detection.
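The core intuition, that moving objects produce temporally dense bursts of events while noise fires sparsely, can be approximated with a simple density filter. This is a crude stand-in for the paper's spiking architecture; the `window` and `min_count` parameters are illustrative:

```python
def isolate_dense_events(timestamps, window=0.01, min_count=3):
    """Keep events with at least `min_count` neighbours (including
    themselves) within `window` seconds.

    Moving objects trigger temporally dense bursts of events, while
    sensor noise fires sparsely, so a density threshold crudely
    separates the two. O(n^2) brute force, fine for a sketch.
    """
    kept = []
    for t in timestamps:
        count = sum(1 for s in timestamps if abs(s - t) <= window)
        if count >= min_count:
            kept.append(t)
    return kept

# Three events in a 2 ms burst survive; a lone event half a second
# later is discarded as noise.
kept = isolate_dense_events([0.000, 0.001, 0.002, 0.500])
```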
arXiv Detail & Related papers (2022-10-03T14:43:11Z) - Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z) - Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z) - A Quality Index Metric and Method for Online Self-Assessment of Autonomous Vehicles Sensory Perception [164.93739293097605]
We propose a novel evaluation metric, the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms.
We have developed a superpixel-based attention network (SPA-NET) that utilizes raw image pixels and superpixels as input to predict the proposed DQI evaluation metric.
arXiv Detail & Related papers (2022-03-04T22:16:50Z) - Moving Object Detection for Event-based Vision using k-means Clustering [0.0]
Moving object detection is a crucial task in computer vision.
Event-based cameras are bio-inspired cameras that work by mimicking the working of the human eye.
In this paper, we investigate the application of the k-means clustering technique in detecting moving objects in event-based data.
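Clustering the spatial coordinates of recent events is a straightforward way to group them into candidate moving objects. A plain k-means sketch on (x, y) event positions; seeding the centers with the first k points is only for determinism in this toy example, and the paper's exact pipeline may differ:

```python
def kmeans_events(points, k=2, iters=10):
    """Plain k-means over event (x, y) coordinates.

    Groups the spatial positions of recent events into k candidate
    object clusters. Centers are seeded with the first k points so
    the sketch is deterministic.
    """
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

# Two toy event blobs: one near (1, 1), one near (10, 10).
events = [(1, 1), (2, 1), (1, 2), (10, 10), (11, 10), (10, 11)]
centers = kmeans_events(events, k=2)
```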
arXiv Detail & Related papers (2021-09-04T14:43:14Z) - Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework incorporating network enhancement and pruning search with the reinforcement learning techniques.
Specifically, a generator Recurrent Neural Network (RNN) is employed to provide the unified scheme for both network enhancement and pruning search automatically.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z) - Night vision obstacle detection and avoidance based on Bio-Inspired Vision Sensors [0.5079840826943617]
We exploit the powerful attributes of event-based cameras to perform obstacle detection in low lighting conditions.
The algorithm filters background activity noise and extracts objects using robust Hough transform technique.
The depth of each detected object is computed by triangulating 2D features extracted using LC-Harris.
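A standard background-activity filter of the kind referenced above keeps an event only if a spatially neighbouring pixel fired recently, so isolated noise events are dropped. A sketch, with an illustrative `dt` and (x, y, t) event format:

```python
def filter_background_activity(events, dt=0.005):
    """Classic background-activity filter for event streams.

    An event at pixel (x, y) survives only if one of its 8 spatial
    neighbours produced an event within the last `dt` seconds;
    isolated noise events have no recent neighbours and are dropped.
    `events` must be time-ordered (x, y, t) tuples.
    """
    last_seen = {}  # pixel -> timestamp of its most recent event
    kept = []
    for x, y, t in events:
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        if any(t - last_seen.get(n, float("-inf")) <= dt
               for n in neighbours):
            kept.append((x, y, t))
        last_seen[(x, y)] = t
    return kept

# Two correlated events around (5, 5) survive; the isolated event
# at (20, 20) is rejected as noise.
kept = filter_background_activity(
    [(5, 5, 0.000), (6, 5, 0.001), (20, 20, 0.002), (5, 6, 0.003)])
```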
arXiv Detail & Related papers (2020-10-29T12:02:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.