Modified Supervised Contrastive Learning for Detecting Anomalous Driving Behaviours
- URL: http://arxiv.org/abs/2109.04021v1
- Date: Thu, 9 Sep 2021 03:50:19 GMT
- Title: Modified Supervised Contrastive Learning for Detecting Anomalous Driving Behaviours
- Authors: Shehroz S. Khan, Ziting Shen, Haoying Sun, Ax Patel, and Ali Abedi
- Abstract summary: We formulate this problem as a supervised contrastive learning approach to learn a visual representation to detect normal, and seen and unseen anomalous driving behaviours.
We show our results on a Driver Anomaly Detection dataset that contains 783 minutes of video recordings of normal and anomalous driving behaviours of 31 drivers.
- Score: 1.4544109317472054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting distracted driving behaviours is important to reduce millions of
deaths and injuries occurring worldwide. Distracted or anomalous driving
behaviours are deviations from 'normal' driving that need to be identified
correctly to alert the driver. However, these driving behaviours do not
comprise one specific type of driving style, and their distribution can differ
between the training and testing phases of a classifier. We formulate this
problem as a supervised contrastive learning approach to learn a visual
representation to detect normal, and seen and unseen anomalous driving
behaviours. We modified the standard contrastive loss function to adjust the
similarity of negative pairs to aid the optimization. Normally, the (self-)
supervised contrastive framework contains an encoder followed by a projection
head, which is omitted during the testing phase because the encoder layers are
considered to contain general visual representations. However, we assert that,
for the supervised contrastive learning task, including the projection head is
beneficial. We show our results on a Driver Anomaly Detection dataset that
contains 783 minutes of video recordings of normal and anomalous driving
behaviours of 31 drivers, captured from various top and front cameras (both
depth and infrared). We also performed an extra step of fine-tuning the labels
in this dataset. Out of the 9 video modality combinations, our modified
contrastive approach improved the ROC AUC on 7 in comparison to the baseline
models (from 3.12% to 8.91% for different modalities); the remaining two models
also had manual labelling. We performed statistical tests that showed evidence
that our modifications perform better than the baseline contrastive models.
Finally, the results showed that the fusion of the depth and infrared modalities
from the top and front views achieved the best ROC AUC of 0.9738 and PR AUC of
0.9772.
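As an illustration of the loss modification described in the abstract, the following is a minimal PyTorch sketch (not the authors' implementation) of a supervised contrastive (SupCon) loss in which the similarities of negative pairs are rescaled by a hypothetical factor `neg_scale`; the exact adjustment used in the paper may differ.

```python
# Minimal sketch of a supervised contrastive loss with a hypothetical
# rescaling of negative-pair similarities (`neg_scale`), illustrating the
# "adjusted negative-pair similarity" idea; not the authors' code.
import torch
import torch.nn.functional as F

def modified_supcon_loss(z, labels, temperature=0.1, neg_scale=1.0):
    """z: (N, D) clip embeddings; labels: (N,) integer class ids (e.g. 0 = normal, 1 = anomalous)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                        # pairwise scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Hypothetical modification: rescale the similarities of negative pairs only.
    sim = torch.where(pos_mask | self_mask, sim, sim * neg_scale)
    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-comparisons from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)  # keep only positive-pair terms
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                               # anchors with at least one positive
    loss = -pos_log_prob.sum(1)[valid] / pos_counts[valid]
    return loss.mean()
```

For example, `loss = modified_supcon_loss(embeddings, labels, neg_scale=0.9)` would mildly down-weight negative pairs; the value 0.9 is arbitrary and only for illustration.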
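The abstract's other point, retaining the projection head at test time rather than discarding it, and reporting ROC AUC and PR AUC, can be sketched as follows. This is an assumption-laden illustration: `encoder`, the layer sizes, and the scoring of test clips by cosine similarity to the mean embedding of normal training clips are placeholders, not necessarily the paper's exact design.

```python
# Sketch of an encoder + projection head that is kept at test time, with
# ROC AUC / PR AUC evaluation; names and the scoring scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score, average_precision_score

class ContrastiveModel(nn.Module):
    def __init__(self, encoder, feat_dim=512, proj_dim=128):
        super().__init__()
        self.encoder = encoder                       # any backbone mapping clips -> (N, feat_dim)
        self.head = nn.Sequential(                   # projection head, retained at test time
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, clips):
        return F.normalize(self.head(self.encoder(clips)), dim=1)

@torch.no_grad()
def evaluate(model, normal_train_embs, test_clips, test_labels):
    """test_labels: 1 = normal driving, 0 = anomalous driving (placeholder convention)."""
    template = F.normalize(normal_train_embs.mean(0, keepdim=True), dim=1)
    scores = (model(test_clips) @ template.t()).squeeze(1).cpu().numpy()
    y = test_labels.cpu().numpy()
    return roc_auc_score(y, scores), average_precision_score(y, scores)
```

Any module that maps a batch of clips to `feat_dim`-dimensional features can be plugged in as `encoder`; with labels coded 1 for normal and 0 for anomalous driving, higher similarity to the normal template counts as a higher score, so `roc_auc_score` and `average_precision_score` are applied to the similarities directly.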
Related papers
- Cross-Camera Distracted Driver Classification through Feature Disentanglement and Contrastive Learning [13.613407983544427]
We introduce a robust model designed to withstand changes in camera position within the vehicle.
Our Driver Behavior Monitoring Network (DBMNet) relies on a lightweight backbone and integrates a disentanglement module.
Experiments conducted on the daytime and nighttime subsets of the 100-Driver dataset validate the effectiveness of our approach.
arXiv Detail & Related papers (2024-11-20T10:27:12Z)
- Predicting Overtakes in Trucks Using CAN Data [51.28632782308621]
We investigate the detection of truck overtakes from CAN data.
Our analysis covers up to 10 seconds before the overtaking event.
We observe that the prediction scores of the overtake class tend to increase as we approach the overtake trigger.
arXiv Detail & Related papers (2024-04-08T17:58:22Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
There is no reliable signal in the target domain to supervise the adaptation process.
We show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Real-Time Driver Monitoring Systems through Modality and View Analysis [28.18784311981388]
Driver distractions are known to be the dominant cause of road accidents.
State-of-the-art methods prioritize accuracy while ignoring latency.
We propose time-effective detection models by neglecting the temporal relation between video frames.
arXiv Detail & Related papers (2022-10-17T21:22:41Z)
- Unsupervised Driving Behavior Analysis using Representation Learning and Exploiting Group-based Training [15.355045011160804]
Driving behavior monitoring plays a crucial role in managing road safety and decreasing the risk of traffic accidents.
Current work performs a robust driving pattern analysis by capturing variations in driving patterns.
It forms consistent groups by learning compressed representation of time series.
arXiv Detail & Related papers (2022-05-12T10:27:47Z)
- Driving Anomaly Detection Using Conditional Generative Adversarial Network [26.45460503638333]
This study proposes an unsupervised method to quantify driving anomalies using a conditional generative adversarial network (GAN).
The approach predicts upcoming driving scenarios by conditioning the models on the previously observed signals.
The results are validated with perceptual evaluations, where annotators are asked to assess the risk and familiarity of the videos detected with high anomaly scores.
arXiv Detail & Related papers (2022-03-15T22:10:01Z)
- Driver Anomaly Detection: A Dataset and Contrastive Learning Approach [17.020790792750457]
We propose a contrastive learning approach to learn a metric to differentiate normal driving from anomalous driving.
Our method reaches 0.9673 AUC on the test set, demonstrating the effectiveness of the contrastive learning approach on the anomaly detection task.
arXiv Detail & Related papers (2020-09-30T13:23:21Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.