Video-based Smoky Vehicle Detection with A Coarse-to-Fine Framework
- URL: http://arxiv.org/abs/2207.03708v1
- Date: Fri, 8 Jul 2022 06:42:45 GMT
- Title: Video-based Smoky Vehicle Detection with A Coarse-to-Fine Framework
- Authors: Xiaojiang Peng, Xiaomao Fan, Qingyang Wu, Jieyan Zhao, Pan Gao
- Abstract summary: We introduce a real-world large-scale smoky vehicle dataset with 75,000 annotated smoky vehicle images.
We also build a smoky vehicle video dataset including 163 long videos with segment-level annotations.
We present a new Coarse-to-fine Deep Smoky vehicle detection framework for efficient smoky vehicle detection.
- Score: 20.74110691914317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic smoky vehicle detection in videos is a superior alternative to the
traditional, expensive remote-sensing approach based on ultraviolet-infrared light
devices used by environmental protection agencies. However, it is challenging to
distinguish vehicle smoke from shadows and wet regions behind vehicles or on
cluttered roads, and the problem is aggravated by limited annotated data. In this
paper, we first introduce a real-world large-scale smoky vehicle dataset with
75,000 annotated smoky vehicle images, facilitating the effective training of
advanced deep learning models. To enable fair algorithm comparison, we also
build a smoky vehicle video dataset including 163 long videos with
segment-level annotations. Moreover, we present a new Coarse-to-fine Deep Smoky
vehicle detection (CoDeS) framework for efficient smoky vehicle detection. The
CoDeS first leverages a light-weight YOLO detector for fast smoke detection
with a high recall rate, then applies a smoke-vehicle matching strategy to
eliminate non-vehicle smoke, and finally uses an elaborately designed 3D model
to further refine the results in the spatio-temporal domain. Extensive experiments
on four metrics demonstrate that our framework is significantly superior to
hand-crafted feature-based methods and recent advanced methods. The code
and dataset will be released at https://github.com/pengxj/smokyvehicle.
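As a rough illustration of the coarse-to-fine idea described in the abstract, the Python sketch below mocks up the three stages: a low-threshold (high-recall) smoke detector, a smoke-vehicle matching step that drops smoke-like regions with no nearby vehicle, and a 3D spatio-temporal classifier for final refinement. All names here (yolo_smoke, yolo_vehicle, clf_3d, match_smoke_to_vehicles, thresholds) are hypothetical placeholders rather than the authors' released code, and the overlap-based matching rule is an assumption about how the matching strategy might be realized.

```python
# Minimal sketch of a coarse-to-fine smoky-vehicle detection pipeline in the
# spirit of CoDeS. Detector and classifier callables are assumed to be
# provided; names and thresholds are illustrative, not the authors' API.

from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float
    score: float

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union between two boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    return inter / (area_a + area_b - inter + 1e-9)

def match_smoke_to_vehicles(smoke: List[Box], vehicles: List[Box],
                            min_overlap: float = 0.1) -> List[Box]:
    """Keep only smoke candidates that overlap a detected vehicle,
    discarding shadow/wet-road regions with no vehicle nearby (assumed rule)."""
    return [s for s in smoke
            if any(iou(s, v) > min_overlap for v in vehicles)]

def detect_smoky_vehicles(frames, yolo_smoke, yolo_vehicle, clf_3d,
                          recall_thresh: float = 0.2, clip_len: int = 16):
    """Coarse-to-fine pipeline:
    1. light-weight YOLO smoke detector with a low threshold (high recall),
    2. smoke-vehicle matching to eliminate non-vehicle smoke,
    3. a 3D spatio-temporal classifier re-scores each surviving candidate
       on a short clip ending at the current frame."""
    results = []
    for t, frame in enumerate(frames):
        smoke = [b for b in yolo_smoke(frame) if b.score > recall_thresh]
        vehicles = yolo_vehicle(frame)
        candidates = match_smoke_to_vehicles(smoke, vehicles)
        # Fine stage: the 3D classifier confirms or rejects each candidate
        # using the temporal context of the last clip_len frames.
        clip = frames[max(0, t - clip_len + 1): t + 1]
        confirmed = [b for b in candidates if clf_3d(clip, b) > 0.5]
        results.append(confirmed)
    return results
```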
Related papers
- Detecting Wildfires on UAVs with Real-time Segmentation Trained by Larger Teacher Models [0.0]
Early detection of wildfires is essential to prevent large-scale fires resulting in extensive environmental, structural, and societal damage.
Uncrewed aerial vehicles (UAVs) can cover large remote areas effectively with quick deployment requiring minimal infrastructure.
In remote areas, however, the UAVs are limited to on-board computing for detection due to the lack of high-bandwidth mobile networks.
This study shows how small specialised segmentation models can be trained using only bounding box labels.
arXiv Detail & Related papers (2024-08-19T11:42:54Z) - EVD4UAV: An Altitude-Sensitive Benchmark to Evade Vehicle Detection in UAV [19.07281015014683]
Vehicle detection in Unmanned Aerial Vehicle (UAV) captured images has wide applications in aerial photography and remote sensing.
Recent studies show that adding an adversarial patch on objects can fool the well-trained deep neural networks based object detectors.
We propose a new dataset named EVD4UAV as an altitude-sensitive benchmark to evade vehicle detection in UAV.
arXiv Detail & Related papers (2024-03-08T16:19:39Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns
Captured by Unmanned Aerial Systems [0.799536002595393]
This research paper addresses the challenge of detecting obscured wildfires in real-time using drones equipped only with RGB cameras.
We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences.
arXiv Detail & Related papers (2023-06-30T19:45:43Z) - TAU: A Framework for Video-Based Traffic Analytics Leveraging Artificial
Intelligence and Unmanned Aerial Systems [2.748428882236308]
We develop an AI-integrated video analytics framework, called TAU (Traffic Analysis from UAVs), for automated traffic analytics and understanding.
Unlike previous works on traffic video analytics, we propose an automated object detection and tracking pipeline from video processing to advanced traffic understanding using high-resolution UAV images.
arXiv Detail & Related papers (2023-03-01T09:03:44Z) - Image-Based Fire Detection in Industrial Environments with YOLOv4 [53.180678723280145]
This work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream.
To this end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector.
arXiv Detail & Related papers (2022-12-09T11:32:36Z) - Deep Vehicle Detection in Satellite Video [0.0]
Vehicle detection is perhaps impossible in single EO satellite images due to the tininess of vehicles (about 4 pixels) and their similarity to the background.
A new model of a compact $3 \times 3$ neural network is proposed which neglects pooling layers and uses leaky ReLUs.
Empirical results on two new annotated satellite videos reconfirm the applicability of this approach for vehicle detection.
arXiv Detail & Related papers (2022-04-14T08:54:44Z) - Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z) - SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous
Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds across 32 different cities under different weather conditions, periods and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z) - Drone-based RGB-Infrared Cross-Modality Vehicle Detection via
Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z) - Road Curb Detection and Localization with Monocular Forward-view Vehicle
Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)