Non-Intrusive Driver Behavior Characterization From Road-Side Cameras
- URL: http://arxiv.org/abs/2302.13125v1
- Date: Sat, 25 Feb 2023 17:22:49 GMT
- Title: Non-Intrusive Driver Behavior Characterization From Road-Side Cameras
- Authors: Pavana Pradeep Kumar, Krishna Kant, Amitangshu Pal
- Abstract summary: We show a proof of concept for characterizing vehicular behavior using only the roadside cameras of an ITS system.
We show that driver classification based on external video analytics yields accuracies within 1-2% of those of direct vehicle-based characterization.
- Score: 1.9659095632676098
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we demonstrate a proof of concept for characterizing vehicular
behavior using only the roadside cameras of the ITS system. The essential
advantage of this method is that it can be implemented in the roadside
infrastructure transparently and inexpensively and can have a global view of
each vehicle's behavior without any involvement of or awareness by the
individual vehicles or drivers. By using a setup that includes programmatically
controlled robot cars (to simulate different types of vehicular behaviors) and
an external video camera set up to capture and analyze the vehicular behavior,
we show that the driver classification based on the external video analytics
yields accuracies that are within 1-2% of the accuracies of direct
vehicle-based characterization. We also show that the residual errors primarily
relate to gaps in correct object identification and tracking and thus can be
further reduced with a more sophisticated setup. The characterization can be
used to enhance both the safety and performance of the traffic flow,
particularly in the mixed manual and automated vehicle scenarios that are
expected to be common soon.
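The paper does not include code, but the pipeline it describes — track each vehicle in the roadside video, summarize its track kinematically, and classify the driver — can be illustrated with a minimal sketch. Everything below (the feature choices, the random-forest classifier, and all names) is our illustrative assumption, not the authors' implementation; tracks are assumed to come from an upstream object tracker running on the camera feed.

```python
# Hypothetical sketch of a roadside-analytics pipeline like the one described
# above. Feature set and classifier are our choices, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kinematic_features(track, dt=1 / 30):
    """Summarize one vehicle track (N x 2 positions, N >= 4 frames)
    as speed/acceleration/jerk statistics."""
    pos = np.asarray(track, dtype=float)
    vel = np.diff(pos, axis=0) / dt           # frame-to-frame velocity (units/s)
    speed = np.linalg.norm(vel, axis=1)
    accel = np.diff(speed) / dt
    jerk = np.diff(accel) / dt
    return np.array([speed.mean(), speed.std(),
                     np.abs(accel).mean(), accel.std(),
                     np.abs(jerk).mean()])

def train_behavior_classifier(tracks, labels):
    """tracks: list of (N_i x 2) arrays; labels: e.g. 0=normal, 1=aggressive."""
    X = np.stack([kinematic_features(t) for t in tracks])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```

Acceleration and jerk statistics are common proxies for aggressive versus normal driving, which is why even a handful of such per-track summaries can already separate coarse behavior classes.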
Related papers
- BEVSeg2TP: Surround View Camera Bird's-Eye-View Based Joint Vehicle Segmentation and Ego Vehicle Trajectory Prediction [4.328789276903559]
Trajectory prediction is a key task for vehicle autonomy.
There is a growing interest in learning-based trajectory prediction.
We show that there is potential to improve perception performance.
arXiv Detail & Related papers (2023-12-20T15:02:37Z)
- Anticipating Driving Behavior through Deep Learning-Based Policy Prediction [66.344923925939]
We developed a system that processes integrated visual features derived from video frames captured by a regular camera, along with depth details obtained from a point cloud scanner.
This system is designed to anticipate driving actions, encompassing both vehicle speed and steering angle.
Our evaluations indicate that the forecasts achieve noteworthy accuracy in at least half of the test scenarios.
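As a rough illustration of the fusion idea in this summary — not the paper's actual architecture — the sketch below concatenates precomputed image features with depth/point-cloud features and regresses vehicle speed and steering angle. All dimensions and layer sizes are hypothetical placeholders.

```python
# Minimal, hypothetical fusion head: image features (e.g. from a CNN
# backbone) and depth features are concatenated and mapped to two
# driving actions, [speed, steering_angle].
import torch
import torch.nn as nn

class FusionPolicyHead(nn.Module):
    def __init__(self, img_dim=512, depth_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + depth_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),  # outputs: [speed, steering_angle]
        )

    def forward(self, img_feat, depth_feat):
        return self.mlp(torch.cat([img_feat, depth_feat], dim=-1))

# usage: pred = FusionPolicyHead()(torch.randn(8, 512), torch.randn(8, 128))
```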
arXiv Detail & Related papers (2023-07-20T17:38:55Z)
- Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
We present control-aware prediction objectives (CAPOs) to evaluate the downstream effect of predictions on control without requiring the planner to be differentiable.
We propose two types of importance weights that weight the predictive likelihood: one using an attention model between agents, and another based on control variation when exchanging predicted trajectories for ground truth trajectories.
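A minimal sketch of the second weighting scheme described above, assuming a black-box `planner` callable that maps a set of agent trajectories to an ego control vector; the interface, the one-agent-at-a-time substitution, and the normalization are our assumptions, not the paper's exact formulation.

```python
# Hedged sketch: weight each agent's predictive likelihood by how much
# the planner's control changes when that agent's predicted trajectory
# is swapped in for the ground truth.
import numpy as np

def control_variation_weights(planner, pred_trajs, gt_trajs):
    """pred_trajs/gt_trajs: per-agent trajectory lists.
    planner(trajs) -> ego control vector (np.ndarray)."""
    u_gt = planner(gt_trajs)
    weights = []
    for i in range(len(gt_trajs)):
        swapped = list(gt_trajs)
        swapped[i] = pred_trajs[i]             # substitute one prediction
        u_swap = planner(swapped)
        weights.append(np.linalg.norm(u_swap - u_gt))
    w = np.asarray(weights)
    return w / (w.sum() + 1e-8)                # normalize to sum to 1

def weighted_nll(log_likelihoods, weights):
    # control-aware objective: importance-weighted negative log-likelihood
    return -np.sum(weights * np.asarray(log_likelihoods))
```

The design intuition is that agents whose predictions barely move the planner's output are safe to predict sloppily, so their likelihood terms get small weights.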
arXiv Detail & Related papers (2022-04-28T07:37:21Z)
- Audiovisual Affect Assessment and Autonomous Automobiles: Applications [0.0]
This contribution aims to foresee the corresponding challenges and to outline potential avenues towards affect modelling in a multimodal "audiovisual plus x" on-the-road context.
From the technical end, this concerns holistic passenger modelling and reliable diarisation of the individuals in a vehicle.
In conclusion, automated affect analysis has only just matured to the point of being applicable in autonomous vehicles, in a first set of selected use cases.
arXiv Detail & Related papers (2022-03-14T20:39:02Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
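The following is a minimal sketch of the barrier-function idea in general, not the paper's dCBF formulation: a distance-based barrier h(x) >= 0 encodes safety, and a soft penalty on violations of the discrete-time CBF condition dh/dt + alpha*h >= 0 can be added to an end-to-end training loss. The obstacle model and the finite-difference approximation are our simplifications.

```python
# Illustrative differentiable CBF-style penalty. h(x) > 0 iff the state
# keeps at least d_safe distance from a point obstacle; the penalty is
# zero whenever the CBF condition  dh/dt + alpha * h >= 0  holds.
import torch

def barrier(x, obstacle, d_safe=1.0):
    return torch.linalg.norm(x - obstacle, dim=-1) - d_safe

def cbf_penalty(x, x_next, obstacle, dt=0.1, alpha=1.0):
    h = barrier(x, obstacle)
    h_dot = (barrier(x_next, obstacle) - h) / dt    # finite-difference dh/dt
    return torch.relu(-(h_dot + alpha * h)).mean()  # penalize violations only

# usage (hypothetical): loss = task_loss + lam * cbf_penalty(x_t, x_t1, obs)
```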
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Automated Object Behavioral Feature Extraction for Potential Risk Analysis based on Video Sensor [6.291501119156943]
Pedestrians are exposed to the risk of death or serious injury on roads, especially at unsignalized crosswalks.
We propose a simpler, automated system for effectively extracting object behavioral features from video sensors deployed on the road.
This study demonstrates the potential for a network of connected video sensors to provide actionable data for smart cities.
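One concrete example of a behavioral feature such a system might extract is a constant-velocity time-to-closest-approach between a tracked vehicle and a pedestrian. The sketch below is our illustration only; tracker outputs (positions and velocities in a common ground plane) are assumed.

```python
# Hypothetical risk feature: time of closest approach between two tracked
# agents under a constant-velocity assumption; returns None if the agents
# are diverging or have no relative motion.
import numpy as np

def time_to_closest_approach(p_veh, v_veh, p_ped, v_ped):
    dp = np.asarray(p_ped, float) - np.asarray(p_veh, float)
    dv = np.asarray(v_ped, float) - np.asarray(v_veh, float)
    denom = float(dv @ dv)
    if denom < 1e-9:
        return None                       # no relative motion
    t_star = -float(dp @ dv) / denom      # time minimizing the distance
    return t_star if t_star > 0 else None
```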
arXiv Detail & Related papers (2021-07-08T01:11:31Z)
- Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z)
- Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
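A hedged sketch of how such self-supervision could work: the relative yaw between consecutive visual-odometry poses yields a free steering-angle label. The pose convention (4x4 world-from-camera matrices, z-up) and the label definition are our assumptions, not the paper's exact recipe.

```python
# Derive steering-angle training labels from a visual-odometry pose track.
import numpy as np

def yaw_from_pose(T):
    # yaw of a 4x4 pose, assuming z-up and heading in the x-y plane
    return np.arctan2(T[1, 0], T[0, 0])

def steering_labels(poses):
    """poses: list of 4x4 world-from-camera matrices for one run."""
    yaws = np.array([yaw_from_pose(T) for T in poses])
    dyaw = np.diff(yaws)
    # wrap to (-pi, pi] so labels do not spike at the +/-pi boundary
    return np.arctan2(np.sin(dyaw), np.cos(dyaw))
```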
arXiv Detail & Related papers (2021-03-20T16:29:01Z)
- Video action recognition for lane-change classification and prediction of surrounding vehicles [12.127050913280925]
Lane-change recognition and prediction tasks are posed as video action recognition problems.
We study the influence of context and observation horizons on performance and analyze different prediction horizons.
The obtained results clearly demonstrate the potential of these methodologies to serve as robust predictors of future lane-changes of surrounding vehicles.
arXiv Detail & Related papers (2021-01-13T13:25:00Z)
- Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles [8.828423067460644]
In highway scenarios, an alert human driver will typically anticipate early cut-in and cut-out maneuvers of surrounding vehicles using only visual cues.
To deal with lane-change recognition and prediction of surrounding vehicles, we pose the problem as an action recognition/prediction problem by stacking visual cues from video cameras.
Two video action recognition approaches are analyzed: two-stream convolutional networks and multiplier networks.
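For orientation, here is a minimal two-stream layout in the spirit of the approaches analyzed above — one CNN over an RGB frame and one over stacked optical-flow maps, with late fusion of class scores. The tiny backbones and the three-way label set are placeholders, not the networks from the paper.

```python
# Sketch of a two-stream network: appearance (RGB) and motion (optical
# flow) streams, fused by summing their class scores.
import torch
import torch.nn as nn

def small_cnn(in_ch, n_classes=3):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes),
    )

class TwoStream(nn.Module):
    def __init__(self, flow_stack=10):
        super().__init__()
        self.rgb = small_cnn(3)                  # appearance stream
        self.flow = small_cnn(2 * flow_stack)    # motion stream (x/y flow)

    def forward(self, rgb, flow):
        return self.rgb(rgb) + self.flow(flow)   # late score fusion

# usage: logits = TwoStream()(torch.randn(4, 3, 112, 112),
#                             torch.randn(4, 20, 112, 112))
```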
arXiv Detail & Related papers (2020-08-25T07:59:15Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.