Self-Configurable Stabilized Real-Time Detection Learning for Autonomous
Driving Applications
- URL: http://arxiv.org/abs/2209.14525v1
- Date: Thu, 29 Sep 2022 03:11:33 GMT
- Title: Self-Configurable Stabilized Real-Time Detection Learning for Autonomous
Driving Applications
- Authors: Won Joon Yun, Soohyun Park, Joongheon Kim, David Mohaisen
- Abstract summary: We improve the performance of an object detection neural network utilizing optical flow estimation.
It adaptively determines whether to use optical flow to suit the dynamic vehicle environment.
In the demonstration, our proposed framework improves accuracy by 3.02% and the number of detected objects by 59.6%, while ensuring queue stability for computing capabilities.
- Score: 15.689145350449737
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Guaranteeing real-time and accurate object detection simultaneously is
paramount in autonomous driving environments. However, the existing object
detection neural network systems are characterized by a tradeoff between
computation time and accuracy, making it essential to optimize such a tradeoff.
Fortunately, in many autonomous driving environments, images come in a
continuous form, providing an opportunity to use optical flow. In this paper,
we improve the performance of an object detection neural network utilizing
optical flow estimation. In addition, we propose a Lyapunov optimization
framework for time-average performance maximization subject to stability. It
adaptively determines whether to use optical flow to suit the dynamic vehicle
environment, thereby ensuring the vehicle's queue stability and the
time-average maximum performance simultaneously. To verify the key ideas, we
conduct numerical experiments with various object detection neural networks and
optical flow estimation networks. In addition, we demonstrate the
self-configurable stabilized detection with YOLOv3-tiny and FlowNet2-S, a
real-time object detection network and an optical flow estimation network,
respectively. In the demonstration, our proposed framework improves accuracy by
3.02% and the number of detected objects by 59.6%, while ensuring queue
stability for the vehicle's computing capabilities.
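The abstract provides no code, but the adaptive decision it describes follows the standard Lyapunov drift-plus-penalty pattern: at each frame, pick the processing mode that maximizes V times the performance reward minus the current queue backlog times the compute cost. Below is a minimal, hypothetical Python sketch of that pattern; the mode names, cost/reward numbers, service rate, and tradeoff parameter V are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical drift-plus-penalty controller for per-frame mode selection.
# NOT the paper's implementation: modes, costs, rewards, and V are assumed
# placeholders illustrating Lyapunov-style queue-aware scheduling.

V = 10.0  # tradeoff knob: larger V weights detection performance over queue drift

# Assumed (compute_cost, accuracy_reward) per mode; the numbers are made up.
MODES = {
    "full_detection": (8.0, 1.0),  # run the detector (e.g., YOLOv3-tiny)
    "optical_flow":   (2.0, 0.7),  # propagate prior boxes via flow (e.g., FlowNet2-S)
}

def choose_mode(q):
    """Pick the mode maximizing the drift-plus-penalty score V*reward - Q(t)*cost."""
    return max(MODES, key=lambda m: V * MODES[m][1] - q * MODES[m][0])

def step_queue(q, mode, service_rate=5.0):
    """Queue update Q(t+1) = max(Q(t) + arrival - service, 0)."""
    cost, _ = MODES[mode]
    return max(q + cost - service_rate, 0.0)

q = 0.0
for t in range(8):
    mode = choose_mode(q)
    q = step_queue(q, mode)
    print(f"t={t}: mode={mode:<15} Q={q:.1f}")
```

With these made-up numbers, the controller runs the full detector while the compute queue is short and falls back to optical-flow propagation as backlog builds, which matches the qualitative behavior the abstract describes.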
Related papers
- Optical Flow Matters: an Empirical Comparative Study on Fusing Monocular Extracted Modalities for Better Steering [37.46760714516923]
This research introduces a new end-to-end method that exploits multimodal information from a single monocular camera to improve the steering predictions for self-driving cars.
By focusing on the fusion of RGB imagery with depth completion information or optical flow data, we propose a framework that integrates these modalities through both early and hybrid fusion techniques.
arXiv Detail & Related papers (2024-09-18T09:36:24Z)
- Event-Aided Time-to-Collision Estimation for Autonomous Driving [28.13397992839372]
We present a novel method that estimates the time to collision using a neuromorphic event-based camera.
The proposed algorithm consists of a two-step approach for efficient and accurate geometric model fitting on event data.
Experiments on both synthetic and real data demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-07-10T02:37:36Z)
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for practical deployment in real driving scenarios due to high latency.
In this paper, we explore the use of neural architecture search (NAS) methods to find efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- Neuromorphic Optical Flow and Real-time Implementation with Event Cameras [47.11134388304464]
We build on the latest developments in event-based vision and spiking neural networks.
We propose a new network architecture that improves the state-of-the-art self-supervised optical flow accuracy.
We demonstrate high-speed optical flow prediction with almost two orders of magnitude lower complexity.
arXiv Detail & Related papers (2023-04-14T14:03:35Z)
- Rethinking Voxelization and Classification for 3D Object Detection [68.8204255655161]
The main challenge in 3D object detection from LiDAR point clouds is achieving real-time performance without affecting the reliability of the network.
We present a solution to improve network inference speed and precision at the same time by implementing a fast dynamic voxelizer.
In addition, we propose a lightweight detection sub-head model that classifies predicted objects and filters out falsely detected ones.
arXiv Detail & Related papers (2023-01-10T16:22:04Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity to predict the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose a velocity-aware streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves state-of-the-art performance on the Argoverse-HD dataset and improves sAP and VsAP by 4.7% and 8.2%, respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Accurate and Real-time Pseudo Lidar Detection: Is Stereo Neural Network Really Necessary? [6.8067583993953775]
We develop a system with a less powerful stereo matching predictor and adopt the proposed refinement schemes to improve the accuracy.
The presented system achieves accuracy competitive with state-of-the-art approaches at only 23 ms of computation, showing it is a suitable candidate for deployment in real in-vehicle applications.
arXiv Detail & Related papers (2022-06-28T09:53:00Z)
- Real-time Object Detection for Streaming Perception [84.2559631820007]
Streaming perception is proposed to jointly evaluate latency and accuracy with a single metric for online video perception.
We build a simple and effective framework for streaming perception.
Our method achieves competitive performance on the Argoverse-HD dataset and improves AP by 4.9% compared to the strong baseline.
arXiv Detail & Related papers (2022-03-23T11:33:27Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Robust Ego and Object 6-DoF Motion Estimation and Tracking [5.162070820801102]
This paper proposes a robust solution to achieve accurate estimation and consistent trackability for dynamic multi-body visual odometry.
A compact and effective framework is proposed leveraging recent advances in semantic instance-level segmentation and accurate optical flow estimation.
A novel formulation that jointly optimizes SE(3) motion and optical flow is introduced, improving the quality of the tracked points and the motion estimation accuracy.
arXiv Detail & Related papers (2020-07-28T05:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.