Toward Extremely Lightweight Distracted Driver Recognition With
Distillation-Based Neural Architecture Search and Knowledge Transfer
- URL: http://arxiv.org/abs/2302.04527v1
- Date: Thu, 9 Feb 2023 09:39:59 GMT
- Title: Toward Extremely Lightweight Distracted Driver Recognition With
Distillation-Based Neural Architecture Search and Knowledge Transfer
- Authors: Dichao Liu, Toshihiko Yamasaki, Yu Wang, Kenji Mase, Jien Kato
- Abstract summary: Many traffic accidents are caused by distracted drivers who take their attention away from driving.
Many researchers have developed CNN-based algorithms to recognize distracted driving from a dashcam.
Current models have too many parameters, making them infeasible for vehicle-mounted computing.
This work proposes a novel knowledge-distillation-based framework to solve this problem.
- Score: 31.370334608784546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The number of traffic accidents has been continuously increasing in recent
years worldwide. Many accidents are caused by distracted drivers, who take
their attention away from driving. Motivated by the success of Convolutional
Neural Networks (CNNs) in computer vision, many researchers developed CNN-based
algorithms to recognize distracted driving from a dashcam and warn the driver
against unsafe behaviors. However, current models have too many parameters,
making them infeasible for vehicle-mounted computing. This work proposes a novel
knowledge-distillation-based framework to solve this problem. The proposed
framework first constructs a high-performance teacher network by progressively
strengthening the robustness to illumination changes from shallow to deep
layers of a CNN. Then, the teacher network is used to guide the architecture
search process of a student network through knowledge distillation. After
that, we use the teacher network again to transfer knowledge to the student
network by knowledge distillation. Experimental results on the State Farm
Distracted Driver Detection Dataset and the AUC Distracted Driver Dataset show that
the proposed approach is highly effective for recognizing distracted driving
behaviors from photos: (1) the teacher network's accuracy surpasses the
previous best accuracy; (2) the student network achieves very high accuracy
with only 0.42M parameters (around 55% of the previous most lightweight model).
Furthermore, the student network architecture can be extended to a
spatial-temporal 3D CNN for recognizing distracted driving from video clips.
The 3D student network largely surpasses the previous best accuracy with only
2.03M parameters on the Drive&Act Dataset. The source code is available at
https://github.com/Dichao-Liu/Lightweight_Distracted_Driver_Recognition_with_Distillation-Based_NAS_and_Knowledge_Transfer.
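The abstract does not spell out the distillation objective, but the knowledge-transfer step it describes is commonly realized as temperature-scaled distillation in the style of Hinton et al., with the same soft-target loss reusable to guide candidate student architectures during the search. The sketch below is a minimal illustration under that assumption; the names `teacher`, `student`, the temperature `T`, and the weight `alpha` are illustrative, not taken from the paper or its repository.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style knowledge distillation loss (illustrative sketch).

    Blends a soft KL term against the teacher's temperature-scaled
    predictions with the usual hard cross-entropy against the labels.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft-target term keeps its gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# The teacher is frozen and only provides targets; candidate architectures
# during the search and the selected student during the final transfer stage
# would both be trained against this loss, e.g.:
#   with torch.no_grad():
#       teacher_logits = teacher(images)
#   loss = distillation_loss(student(images), teacher_logits, labels)
```

In the pipeline described above, such a loss would be applied twice: once while searching the student architecture under the teacher's guidance, and once more when transferring knowledge to the searched student.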
Related papers
- Blind-Spot Collision Detection System for Commercial Vehicles Using
Multi Deep CNN Architecture [0.17499351967216337]
Two convolutional neural networks (CNNs) based on high-level feature descriptors are proposed to detect blind-spot collisions for heavy vehicles.
A fusion approach is proposed to integrate two pre-trained networks for extracting high-level features for blind-spot vehicle detection.
The fusion of features significantly improves the performance of Faster R-CNN and outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2022-08-17T11:10:37Z) - Paint and Distill: Boosting 3D Object Detection with Semantic Passing
Network [70.53093934205057]
The 3D object detection task from lidar or camera sensors is essential for autonomous driving.
We propose a novel semantic passing framework, named SPNet, to boost the performance of existing lidar-based 3D detection models.
arXiv Detail & Related papers (2022-07-12T12:35:34Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - itKD: Interchange Transfer-based Knowledge Distillation for 3D Object
Detection [3.735965959270874]
We propose an autoencoder-style framework comprising channel-wise compression and decompression.
To learn the map-view feature of a teacher network, the features from teacher and student networks are independently passed through the shared autoencoder.
We present a head attention loss to match the 3D object detection information drawn by the multi-head self-attention mechanism.
arXiv Detail & Related papers (2022-05-31T04:25:37Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - Driving Style Representation in Convolutional Recurrent Neural Network
Model of Driver Identification [8.007800530105191]
We present a deep-neural-network architecture, termed D-CRNN, for building high-fidelity representations of driving style.
Using a CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using an RNN to encode driving style.
arXiv Detail & Related papers (2021-02-11T04:33:43Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Drive-Net: Convolutional Network for Driver Distraction Detection [2.485182034310304]
We present an automated supervised learning method called Drive-Net for driver distraction detection.
Drive-Net uses a combination of a convolutional neural network (CNN) and a random decision forest for classifying images of a driver.
Results show that Drive-Net achieves a detection accuracy of 95%, which is 2% more than the best results obtained on the same database using other methods.
arXiv Detail & Related papers (2020-06-22T19:54:53Z) - Auto-Rectify Network for Unsupervised Indoor Depth Estimation [119.82412041164372]
We establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth.
We propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning.
Our results outperform the previous unsupervised SOTA method by a large margin on the challenging NYUv2 dataset.
arXiv Detail & Related papers (2020-06-04T08:59:17Z) - DriftNet: Aggressive Driving Behavior Classification using 3D
EfficientNet Architecture [1.8734449181723827]
Aggressive driving (i.e., car drifting) is a dangerous behavior that puts human safety and lives at significant risk.
Recent techniques in deep learning proposed new approaches for anomaly detection in different contexts.
In this paper, we propose a new anomaly detection framework applied to the detection of aggressive driving behavior.
arXiv Detail & Related papers (2020-04-18T08:36:04Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.