Leveraging the Edge and Cloud for V2X-Based Real-Time Object Detection
in Autonomous Driving
- URL: http://arxiv.org/abs/2308.05234v1
- Date: Wed, 9 Aug 2023 21:39:10 GMT
- Title: Leveraging the Edge and Cloud for V2X-Based Real-Time Object Detection
in Autonomous Driving
- Authors: Faisal Hawlader, François Robinet, and Raphaël Frank
- Abstract summary: Environmental perception is a key element of autonomous driving.
In this paper, we investigate the best trade-off between detection quality and latency for real-time perception in autonomous vehicles.
We show that models with adequate compression can be run in real-time on the cloud while outperforming local detection performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Environmental perception is a key element of autonomous driving because the
information received from the perception module influences core driving
decisions. An outstanding challenge in real-time perception for autonomous
driving lies in finding the best trade-off between detection quality and
latency. Major constraints on both computation and power have to be taken into
account for real-time perception in autonomous vehicles. Larger object
detection models tend to produce the best results, but are also slower at
runtime. Since the most accurate detectors cannot run in real-time locally, we
investigate the possibility of offloading computation to edge and cloud
platforms, which are less resource-constrained. We create a synthetic dataset
to train object detection models and evaluate different offloading strategies.
Using real hardware and network simulations, we compare different trade-offs
between prediction quality and end-to-end delay. Since sending raw frames over
the network implies additional transmission delays, we also explore the use of
JPEG and H.265 compression at varying qualities and measure their impact on
prediction metrics. We show that models with adequate compression can be run in
real-time on the cloud while outperforming local detection performance.
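The trade-off described above can be made concrete with a small sketch. The snippet below is a minimal illustration and not the authors' pipeline: it uses OpenCV to encode a frame as JPEG at several quality levels, and all network and inference numbers (uplink bandwidth, round-trip latency, cloud and local inference times) are assumed placeholders used to compare an estimated offloading delay against a local-detection baseline.

```python
# Minimal sketch (not the paper's implementation): estimate end-to-end delay
# for offloading a JPEG-compressed frame versus running a local detector.
# Bandwidth, latency, and inference times are illustrative placeholders.
import cv2
import numpy as np

UPLINK_MBPS = 50.0      # assumed V2X uplink bandwidth
NETWORK_RTT_S = 0.020   # assumed round-trip network latency
CLOUD_INFER_S = 0.015   # assumed large-model inference time on the cloud
LOCAL_INFER_S = 0.080   # assumed small-model inference time on the vehicle

def offload_delay(frame, jpeg_quality):
    """Encode time + transmission time + network RTT + cloud inference."""
    t0 = cv2.getTickCount()
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    encode_s = (cv2.getTickCount() - t0) / cv2.getTickFrequency()
    tx_s = (buf.nbytes * 8) / (UPLINK_MBPS * 1e6)  # payload size drives transmission delay
    return encode_s + tx_s + NETWORK_RTT_S + CLOUD_INFER_S

if __name__ == "__main__":
    # Random noise as a stand-in frame; a real camera image compresses much better.
    frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    for q in (95, 80, 50, 20):
        d = offload_delay(frame, q)
        print(f"JPEG q={q:3d}: offload ~{d * 1000:.1f} ms "
              f"(local baseline ~{LOCAL_INFER_S * 1000:.0f} ms)")
```

Lowering the JPEG quality shrinks the payload and hence the transmission delay; the paper weighs this gain against the resulting loss in prediction quality.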
Related papers
- Real-Time Pedestrian Detection on IoT Edge Devices: A Lightweight Deep Learning Approach [1.4732811715354455]
This research explores implementing a lightweight deep learning model on Artificial Intelligence of Things (AIoT) edge devices.
An optimized You Only Look Once (YOLO) based DL model is deployed for real-time pedestrian detection.
The simulation results demonstrate that the optimized YOLO model can achieve real-time pedestrian detection, with an inference time of 147 milliseconds, a frame rate of 2.3 frames per second, and an accuracy of 78%.
arXiv Detail & Related papers (2024-09-24T04:48:41Z) - Is That Rain? Understanding Effects on Visual Odometry Performance for Autonomous UAVs and Efficient DNN-based Rain Classification at the Edge [1.8936798735951972]
State-of-the-art local tracking and trajectory planning are typically performed using camera sensor input to the flight control algorithm.
We show that a worst-case average tracking error of 1.5 m is possible for a state-of-the-art visual odometry system.
We train a set of deep neural network models suited to mobile and constrained deployment scenarios to determine how efficiently and accurately these 'rainy' conditions can be classified.
arXiv Detail & Related papers (2024-07-17T15:47:25Z) - Real-time Traffic Object Detection for Autonomous Driving [5.780326596446099]
Modern computer vision techniques tend to prioritize accuracy over efficiency.
Existing object detectors are far from being real-time.
We propose a more suitable alternative that incorporates real-time requirements.
arXiv Detail & Related papers (2024-01-31T19:12:56Z) - Rethinking Voxelization and Classification for 3D Object Detection [68.8204255655161]
The main challenge in 3D object detection from LiDAR point clouds is achieving real-time performance without affecting the reliability of the network.
We present a solution to improve network inference speed and precision at the same time by implementing a fast dynamic voxelizer.
In addition, we propose a lightweight detection sub-head model that classifies predicted objects and filters out falsely detected ones.
arXiv Detail & Related papers (2023-01-10T16:22:04Z) - DaDe: Delay-adaptive Detector for Streaming Perception [0.0]
In real-time settings, the surrounding environment has already changed by the time processing finishes.
Streaming perception is proposed to assess the latency and accuracy of real-time video perception.
We develop a model that can reflect processing delays in real time and produce the most reasonable results.
arXiv Detail & Related papers (2022-12-22T09:25:46Z) - StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity to predict the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose Velocity-awared streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z) - Real-time Object Detection for Streaming Perception [84.2559631820007]
Streaming perception is proposed to jointly evaluate latency and accuracy as a single metric for online video perception (a toy sketch of this evaluation scheme appears after this list).
We build a simple and effective framework for streaming perception.
Our method achieves competitive performance on Argoverse-HD dataset and improves the AP by 4.9% compared to the strong baseline.
arXiv Detail & Related papers (2022-03-23T11:33:27Z) - Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Testing the Safety of Self-driving Vehicles by Simulating Perception and
Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
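Several of the entries above (DaDe, StreamYOLO, and Real-time Object Detection for Streaming Perception) evaluate detectors under streaming perception. As a toy illustration of that setup, the sketch below scores each ground-truth frame against the most recent prediction that has finished by that frame's timestamp; the frame rate and the fixed 80 ms processing delay are assumptions for illustration, not values from the benchmarks.

```python
# Toy sketch (an assumption for illustration, not the benchmarks' code):
# streaming metrics pair each ground-truth frame with the newest prediction
# already available at that frame's timestamp, so processing delay directly
# costs accuracy.
FRAME_PERIOD_S = 1.0 / 30.0  # assumed 30 FPS camera stream

def latest_finished(pred_finish_times_s, query_time_s):
    """Index of the newest prediction available at query_time_s, or None."""
    done = [i for i, t in enumerate(pred_finish_times_s) if t <= query_time_s]
    return done[-1] if done else None

if __name__ == "__main__":
    # Predictions on frames 0, 1, 2, ... each finish 80 ms after frame capture.
    finish_times = [i * FRAME_PERIOD_S + 0.080 for i in range(10)]
    for frame_idx in range(3, 6):
        t = frame_idx * FRAME_PERIOD_S
        print(f"frame {frame_idx} (t={t * 1000:.0f} ms) is scored against "
              f"prediction from frame {latest_finished(finish_times, t)}")
```

Under this rule, a slower detector is always matched against older predictions, so its effective accuracy drops even when its per-frame detections are good.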
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.