Designing deep neural networks for driver intention recognition
- URL: http://arxiv.org/abs/2402.05150v1
- Date: Wed, 7 Feb 2024 12:54:15 GMT
- Title: Designing deep neural networks for driver intention recognition
- Authors: Koen Vellenga, H. Joe Steinhauer, Alexander Karlsson, Göran Falkman,
Asli Rhodin and Ashok Koppisetty
- Abstract summary: This paper applies neural architecture search to investigate the effects of the deep neural network architecture on a real-world safety critical application.
A set of eight search strategies is evaluated for two driver intention recognition datasets.
- Score: 40.87622566719826
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Driver intention recognition studies increasingly rely on deep neural
networks. Deep neural networks have achieved top performance on many different
tasks, but it is not common practice to explicitly analyse the complexity and
performance of the network's architecture. Therefore, this paper applies neural
architecture search to investigate the effects of the deep neural network
architecture on a real-world safety-critical application with limited
computational capabilities. We explore a pre-defined search space for three
deep neural network layer types capable of handling sequential data (a long
short-term memory, temporal convolution, and a time-series transformer layer),
and the influence of different data fusion strategies on driver intention
recognition performance. A set of eight search strategies is evaluated for two
driver intention recognition datasets. For the two datasets, we observed that
no search strategy clearly samples better deep neural network architectures.
However, performing an architecture search does improve the model performance
compared to the original manually designed networks. Furthermore, we observe
no relation between increased model complexity and higher driver intention
recognition performance. The results indicate that multiple architectures
yield similar performance, regardless of the deep neural network layer type or
fusion strategy.
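As a concrete illustration of the kind of pre-defined search space the abstract describes, the sketch below draws one candidate at a time from a choice among the three sequential layer types plus a data fusion strategy. It is a minimal sketch only: the layer names mirror the abstract, but every hyperparameter range, default value, and the fusion encoding are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming a simple random-sampling NAS loop; all
# hyperparameter ranges and defaults below are illustrative assumptions,
# not the authors' search space definition.
import random

import torch.nn as nn

SEARCH_SPACE = {
    "layer_type": ["lstm", "tcn", "transformer"],  # the three layer types studied
    "hidden_size": [32, 64, 128],
    "num_layers": [1, 2, 3],
    "fusion": ["early", "late"],  # example data fusion strategies
}

def sample_architecture(rng=random):
    """Draw one candidate architecture from the search space."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def build_encoder(cfg, input_size):
    """Instantiate the sequential encoder described by a sampled config."""
    if cfg["layer_type"] == "lstm":
        return nn.LSTM(input_size, cfg["hidden_size"],
                       num_layers=cfg["num_layers"], batch_first=True)
    if cfg["layer_type"] == "tcn":
        # A plain Conv1d stack stands in for a temporal convolution network.
        layers, channels = [], input_size
        for _ in range(cfg["num_layers"]):
            layers += [nn.Conv1d(channels, cfg["hidden_size"],
                                 kernel_size=3, padding=1), nn.ReLU()]
            channels = cfg["hidden_size"]
        return nn.Sequential(*layers)
    # Time-series transformer encoder; d_model must be divisible by nhead.
    layer = nn.TransformerEncoderLayer(d_model=input_size, nhead=4,
                                       batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=cfg["num_layers"])

cfg = sample_architecture()
encoder = build_encoder(cfg, input_size=16)
print(cfg)
```

In a full search, each sampled candidate would be trained and scored on the driver intention recognition task; the eight search strategies compared in the paper differ mainly in how the next candidate is chosen.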
Related papers
- EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition [54.99121380536659]
Eye movement biometrics have received increasing attention thanks to their highly secure identification.
Deep learning (DL) models have recently been successfully applied to eye movement recognition.
However, the DL architecture is still determined by human prior knowledge.
We propose EM-DARTS, a hierarchical differentiable architecture search algorithm to automatically design the DL architecture for eye movement recognition.
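For context, differentiable architecture search methods in the DARTS family, which EM-DARTS extends, relax the discrete choice among candidate operations into a softmax-weighted mixture so the choice itself can be learned by gradient descent. The following is a minimal sketch of that core relaxation only, not the EM-DARTS implementation; all candidate operations and shapes are arbitrary placeholders.

```python
# Minimal sketch of the DARTS-style relaxation underlying differentiable
# architecture search; not EM-DARTS itself, and all shapes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate operations, making the
    architecture choice differentiable."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # One architecture parameter (alpha) per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(1, 8, 32, 32)
print(MixedOp(channels=8)(x).shape)  # torch.Size([1, 8, 32, 32])
```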
arXiv Detail & Related papers (2024-09-22T13:11:08Z) - Efficient and Accurate Hyperspectral Image Demosaicing with Neural Network Architectures [3.386560551295746]
This study investigates the effectiveness of neural network architectures in hyperspectral image demosaicing.
We introduce a range of network models and modifications, and compare them with classical methods and existing reference network approaches.
Results indicate that our networks outperform or match reference models on both datasets, demonstrating exceptional performance.
arXiv Detail & Related papers (2023-12-21T08:02:49Z) - Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural
Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z) - Convolution, aggregation and attention based deep neural networks for
accelerating simulations in mechanics [1.0154623955833253]
We demonstrate three types of neural network architectures for efficient learning of deformations of solid bodies.
The first two are based on the recently proposed CNN U-NET and MAgNET frameworks, which have shown promising performance for learning on mesh-based data.
The third architecture is Perceiver IO, a very recent architecture that belongs to the family of attention-based neural networks.
arXiv Detail & Related papers (2022-12-01T13:10:56Z) - Max and Coincidence Neurons in Neural Networks [0.07614628596146598]
We optimize networks containing models of the max and coincidence neurons using neural architecture search.
We analyze the structure, operations, and neurons of optimized networks to develop a signal-processing ResNet.
The developed network achieves an average 2% improvement in accuracy and a 25% reduction in network size across a variety of datasets.
arXiv Detail & Related papers (2021-10-04T07:13:50Z) - Efficient Neural Architecture Search with Performance Prediction [0.0]
We use neural architecture search (NAS) to find the best network architecture for the task at hand.
Existing NAS algorithms generally evaluate the fitness of a new architecture by fully training it from scratch.
An end-to-end offline performance predictor is proposed to accelerate the evaluation of sampled architectures.
arXiv Detail & Related papers (2021-08-04T05:44:16Z) - Firefly Neural Architecture Descent: a General Approach for Growing
Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z) - NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural
Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Inferring Convolutional Neural Networks' accuracies from their
architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that these architectural attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.