Fast Vehicle Detection and Tracking on Fisheye Traffic Monitoring Video
using CNN and Bounding Box Propagation
- URL: http://arxiv.org/abs/2207.01183v1
- Date: Mon, 4 Jul 2022 03:55:19 GMT
- Title: Fast Vehicle Detection and Tracking on Fisheye Traffic Monitoring Video
using CNN and Bounding Box Propagation
- Authors: Sandy Ardianto, Hsueh-Ming Hang, Wen-Huang Cheng (National Yang Ming
Chiao Tung University)
- Abstract summary: We design a fast car detection and tracking algorithm for fisheye traffic monitoring video captured by cameras mounted at crossroads.
To speed up, the grayscale frame difference is used for the intermediate frames in a segment, which can double the processing speed.
- Score: 5.366354612549172
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We design a fast car detection and tracking algorithm for fisheye
traffic monitoring video captured by cameras mounted at crossroads. We use the
ICIP 2020 VIP Cup dataset and adopt
YOLOv5 as the object detection base model. The nighttime video of this dataset
is very challenging, and the detection accuracy (AP50) of the base model is
about 54%. We design a reliable car detection and tracking algorithm based on
the concept of bounding box propagation among frames, which provides 17.9
percentage points (pp) and 7 pp accuracy improvement over the base model for
the nighttime and daytime videos, respectively. To speed up, the grayscale
frame difference is used for the intermediate frames in a segment, which can
double the processing speed.
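The segment-based speed-up lends itself to a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the detector (e.g. a YOLOv5 wrapper passed in as `detect_fn`) runs only on the key frames of a segment, and boxes are propagated through the intermediate frames by following the centroid of the thresholded grayscale frame difference inside a search window around each box. The stride, threshold, and margin values are illustrative assumptions.
```python
import cv2
import numpy as np

def propagate_boxes(prev_gray, curr_gray, boxes, search_margin=20, diff_thresh=25):
    """Shift each bounding box toward the motion seen in the grayscale frame
    difference (illustrative stand-in for the paper's bounding-box propagation,
    not the authors' exact rule)."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    h, w = curr_gray.shape
    out = []
    for (x1, y1, x2, y2) in boxes:
        # Search window: the box expanded by a margin.
        sx1, sy1 = max(0, x1 - search_margin), max(0, y1 - search_margin)
        sx2, sy2 = min(w, x2 + search_margin), min(h, y2 + search_margin)
        ys, xs = np.nonzero(motion[sy1:sy2, sx1:sx2])
        if len(xs) == 0:              # no motion: keep the box where it was
            out.append((x1, y1, x2, y2))
            continue
        # Re-centre the box on the motion centroid.
        cx, cy = sx1 + xs.mean(), sy1 + ys.mean()
        bw, bh = x2 - x1, y2 - y1
        out.append((int(cx - bw / 2), int(cy - bh / 2),
                    int(cx + bw / 2), int(cy + bh / 2)))
    return out

def track_segment(frames, detect_fn, key_stride=2):
    """Run the CNN detector (detect_fn) only on every key_stride-th frame and
    propagate boxes through intermediate frames with frame differencing."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    results, boxes = [], []
    for i, g in enumerate(grays):
        if i % key_stride == 0:
            boxes = detect_fn(frames[i])      # e.g. a YOLOv5 wrapper
        else:
            boxes = propagate_boxes(grays[i - 1], g, boxes)
        results.append(boxes)
    return results
```
Skipping the CNN on the intermediate frames is what roughly doubles throughput, since the propagation step costs only a frame subtraction and a threshold.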
Related papers
- Nighttime Driver Behavior Prediction Using Taillight Signal Recognition
via CNN-SVM Classifier [2.44755919161855]
This paper aims to enhance the ability to predict nighttime driving behavior by identifying taillights of both human-driven and autonomous vehicles.
The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road.
To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images.
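The abstract does not spell out its pixel-wise day-to-night conversion, so the following is only a generic stand-in showing the kind of per-pixel transform such augmentation typically uses: gamma darkening plus a cool colour cast. The function name and constants are illustrative assumptions, not the paper's technique.
```python
import numpy as np

def day_to_night(img_bgr, gamma=2.2, tint=(1.0, 0.85, 0.7)):
    """Illustrative pixel-wise day-to-night transform: darken mid-tones with a
    gamma curve, then apply a per-channel (B, G, R) cool colour cast."""
    x = img_bgr.astype(np.float32) / 255.0
    x = np.power(x, gamma)                          # darken mid-tones
    x *= np.array(tint, dtype=np.float32)           # bluish night cast
    return np.clip(x * 255.0, 0, 255).astype(np.uint8)
```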
arXiv Detail & Related papers (2023-10-25T15:23:33Z) - So you think you can track? [37.25914081637133]
This work introduces a multi-camera tracking dataset consisting of 234 hours of video data recorded concurrently from 234 HD cameras covering a 4.2 mile stretch of 8-10 lane interstate highway near Nashville, TN.
The video is recorded during a period of high traffic density with 500+ objects typically visible within the scene and typical object longevities of 3-15 minutes.
GPS trajectories from 270 vehicle passes through the scene are manually corrected in the video data to provide a set of ground-truth trajectories for recall-oriented tracking metrics.
arXiv Detail & Related papers (2023-09-13T19:18:18Z) - StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate the accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z) - Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera trajectory generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z) - Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
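A hedged sketch of the two-step idea: an off-the-shelf tracker supplies bounding boxes, and a small network regresses velocity from a short box history. The layer sizes, five-frame history, and scalar output below are illustrative assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class VelocityRegressor(nn.Module):
    """Small MLP mapping a short history of tracked boxes (T boxes, each as
    x, y, w, h in normalized image coordinates) to a velocity estimate."""
    def __init__(self, history=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history * 4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),               # scalar relative velocity
        )

    def forward(self, box_history):             # (batch, history * 4)
        return self.net(box_history)

# Usage sketch: one sample per tracked vehicle.
model = VelocityRegressor()
boxes = torch.rand(8, 5 * 4)                    # 8 vehicles, 5-frame box history
velocity = model(boxes)                         # (8, 1) predicted velocities
```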
arXiv Detail & Related papers (2021-09-16T13:10:27Z) - 2nd Place Solution for Waymo Open Dataset Challenge - Real-time 2D
Object Detection [26.086623067939605]
In this report, we introduce a real-time method to detect the 2D objects from images.
We leverage TensorRT acceleration to optimize the inference time of our detection pipeline.
Our framework achieves a latency of 45.8 ms/frame on an Nvidia Tesla V100 GPU.
arXiv Detail & Related papers (2021-06-16T11:32:03Z) - Object Tracking by Detection with Visual and Motion Cues [1.7818230914983044]
Self-driving cars need to detect and track objects in camera images.
We present a simple online tracking algorithm that is based on a constant velocity motion model with a Kalman filter.
We evaluate our approach on the challenging BDD100K dataset.
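The constant-velocity motion model with a Kalman filter mentioned above is textbook material; a minimal sketch over a bounding-box centre follows. The noise covariances and position-only measurement model are generic assumptions, not values from the paper.
```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter over a box centre (cx, cy, vx, vy)."""
    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])           # state
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, dt, 0],               # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # observe position only
        self.Q = np.eye(4) * 0.01                       # process noise
        self.R = np.eye(2) * 1.0                        # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                               # predicted centre

    def update(self, cx, cy):
        z = np.array([cx, cy])
        y = z - self.H @ self.x                         # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```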
arXiv Detail & Related papers (2021-01-19T10:29:16Z) - Fast Motion Understanding with Spatiotemporal Neural Networks and
Dynamic Vision Sensors [99.94079901071163]
This paper presents a Dynamic Vision Sensor (DVS) based system for reasoning about high speed motion.
We consider the case of a robot at rest reacting to a small, fast-approaching object at speeds higher than 15 m/s.
We report results on a toy dart moving at 23.4 m/s, with a 24.73 deg error in $\theta$, an 18.4 mm average discretized radius prediction error, and a 25.03% median time-to-collision prediction error.
arXiv Detail & Related papers (2020-11-18T17:55:07Z) - Fast Video Object Segmentation With Temporal Aggregation Network and
Dynamic Template Matching [67.02962970820505]
We introduce "tracking-by-detection" into Video Object Segmentation (VOS).
We propose a new temporal aggregation network and a novel dynamic time-evolving template matching mechanism to achieve significantly improved performance.
We achieve new state-of-the-art performance on the DAVIS benchmark in both speed and accuracy, without complicated bells and whistles, at a speed of 0.14 seconds per frame and a J&F measure of 75.9%.
arXiv Detail & Related papers (2020-07-11T05:44:16Z) - Tracking Objects as Points [83.9217787335878]
We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art.
Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame.
CenterTrack is simple, online (no peeking into the future), and real-time.
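A rough sketch of the pair-of-frames association idea: each current detection predicts an offset back to its previous-frame position and is greedily matched to the nearest unclaimed prior detection. This is a simplified stand-in; CenterTrack learns the offsets end-to-end and orders the greedy matching by detection confidence.
```python
import numpy as np

def greedy_associate(prev_centers, curr_centers, offsets, max_dist=50.0):
    """Greedy centre-distance association in the spirit of CenterTrack:
    curr_centers[i] - offsets[i] approximates where object i was in the
    previous frame; match it to the nearest unclaimed previous centre."""
    matches, used = {}, set()
    for i, (c, off) in enumerate(zip(curr_centers, offsets)):
        pred_prev = np.asarray(c, dtype=float) - np.asarray(off, dtype=float)
        best, best_d = None, max_dist
        for j, p in enumerate(prev_centers):
            if j in used:
                continue
            d = np.linalg.norm(pred_prev - np.asarray(p, dtype=float))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best                   # current index -> previous index
            used.add(best)
    return matches                              # unmatched detections start new tracks

# Usage sketch with made-up centres and offsets:
prev = [(100, 120), (300, 240)]
curr = [(108, 122), (305, 238), (50, 60)]
offs = [(8, 2), (5, -2), (0, 0)]
print(greedy_associate(prev, curr, offs))       # {0: 0, 1: 1}; detection 2 is new
```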
arXiv Detail & Related papers (2020-04-02T17:58:40Z)