Improved YOLOv3 Object Classification in Intelligent Transportation
System
- URL: http://arxiv.org/abs/2004.03948v1
- Date: Wed, 8 Apr 2020 11:45:13 GMT
- Title: Improved YOLOv3 Object Classification in Intelligent Transportation
System
- Authors: Yang Zhang, Changhui Hu, Xiaobo Lu
- Abstract summary: An algorithm based on YOLOv3 is proposed to detect and classify vehicles, drivers, and people on the highway.
The model performs well and is robust to road occlusion, different driver poses, and extreme lighting.
- Score: 29.002873450422083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle and driver detection in Intelligent Transportation Systems
(ITS) has been a hot topic in recent years. In particular, driver detection is
still a challenging problem, and solving it is conducive to supervising traffic
order and maintaining public safety. In this paper, an algorithm based on
YOLOv3 is proposed to detect and classify vehicles, drivers, and people on the
highway, so as to distinguish drivers from passengers and form a one-to-one
correspondence between vehicles and drivers. The proposed model and comparison
experiments are conducted on our self-built traffic driver face database. The
effectiveness of the proposed algorithm is validated by extensive experiments
under various complex highway conditions. Compared with other advanced vehicle
and driver detection technologies, the model performs well and is robust to
road occlusion, different driver poses, and extreme lighting.
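The abstract describes forming a one-to-one correspondence between detected vehicles and drivers but gives no implementation details. As an illustration only, below is a minimal sketch assuming a YOLOv3 detector has already produced axis-aligned boxes for the "vehicle" and "driver" classes; the greedy containment-based matching rule, its threshold, and all function names are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only -- not code from the paper. Pairs each detected
# driver box with the vehicle box that contains it most, assuming a YOLOv3
# detector has already produced axis-aligned (x1, y1, x2, y2) boxes for the
# "vehicle" and "driver" classes mentioned in the abstract.
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def containment(driver: Box, vehicle: Box) -> float:
    """Fraction of the driver box's area that lies inside the vehicle box."""
    ix1, iy1 = max(driver[0], vehicle[0]), max(driver[1], vehicle[1])
    ix2, iy2 = min(driver[2], vehicle[2]), min(driver[3], vehicle[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = (driver[2] - driver[0]) * (driver[3] - driver[1])
    return inter / area if area > 0 else 0.0


def pair_drivers_to_vehicles(drivers: List[Box], vehicles: List[Box],
                             min_overlap: float = 0.5) -> List[Tuple[int, int]]:
    """Greedily assign each driver box to the unused vehicle box containing it most."""
    pairs: List[Tuple[int, int]] = []
    used = set()
    for d_idx, d in enumerate(drivers):
        best_v: Optional[int] = None
        best_score = min_overlap
        for v_idx, v in enumerate(vehicles):
            if v_idx in used:
                continue
            score = containment(d, v)
            if score > best_score:
                best_v, best_score = v_idx, score
        if best_v is not None:
            pairs.append((d_idx, best_v))
            used.add(best_v)  # enforce the one-to-one correspondence
    return pairs


# Example: one driver box lying inside the first of two vehicle boxes.
print(pair_drivers_to_vehicles([(120, 80, 160, 120)],
                               [(100, 60, 260, 180), (300, 60, 460, 180)]))
# -> [(0, 0)]
```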
Related papers
- Towards Safe Autonomy in Hybrid Traffic: Detecting Unpredictable Abnormal Behaviors of Human Drivers via Information Sharing [21.979007506007733]
We show that our proposed algorithm has great detection performance in both highway and urban traffic.
The best performance achieves a detection rate of 97.3%, an average detection delay of 1.2 s, and zero false alarms.
arXiv Detail & Related papers (2023-08-23T18:24:28Z)
- Detecting Socially Abnormal Highway Driving Behaviors via Recurrent Graph Attention Networks [4.526932450666445]
This work focuses on detecting abnormal driving behaviors from trajectories produced by highway video surveillance systems.
We propose an autoencoder with a Recurrent Graph Attention Network that captures highway driving behaviors contextualized by the surrounding cars.
Our model is scalable to large freeways with thousands of cars.
arXiv Detail & Related papers (2023-04-23T01:32:47Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Threat Detection In Self-Driving Vehicles Using Computer Vision [0.0]
We propose a threat detection mechanism for autonomous self-driving cars using dashcam videos.
The system has four major components, including YOLO to identify objects, an advanced lane detection algorithm, and a multi-regression model to measure each object's distance from the camera.
The final accuracy of our proposed Threat Detection Model (TDM) is 82.65%.
arXiv Detail & Related papers (2022-09-06T12:01:07Z)
- Vision Transformers and YoloV5 based Driver Drowsiness Detection Framework [0.0]
This paper introduces a novel framework based on vision transformers and YoloV5 architectures for driver drowsiness recognition.
A custom pre-trained YoloV5 architecture is proposed for face extraction, with the aim of isolating the Region of Interest (ROI).
For further evaluation, the proposed framework is tested on a custom dataset of 39 participants under various lighting conditions and achieves 95.5% accuracy.
arXiv Detail & Related papers (2022-09-03T11:37:41Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network to predict both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Mining Personalized Climate Preferences for Assistant Driving [1.6752182911522522]
We propose a novel approach to climate control, driver behavior recognition, and driving recommendation that better fits drivers' preferences in their daily driving.
A prototype using a client-server architecture with an iOS app and an air-quality monitoring sensor has been developed.
Real-world experiments on driving data of 11,370 km (320 hours) by different drivers in multiple cities worldwide have been conducted.
arXiv Detail & Related papers (2020-06-16T00:45:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.