Akkumula: Evidence accumulation driver models with Spiking Neural Networks
- URL: http://arxiv.org/abs/2505.05489v1
- Date: Wed, 30 Apr 2025 10:03:11 GMT
- Title: Akkumula: Evidence accumulation driver models with Spiking Neural Networks
- Authors: Alberto Morando
- Abstract summary: This paper introduces Akkumula, an evidence accumulation modelling framework built using deep learning techniques. The core of the library is based on Spiking Neural Networks, whose operation mimics the evidence accumulation process in the biological brain. The model fits the time course of vehicle control well, based on vehicle sensor data.
- Score: 1.6988773509268102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Processes of evidence accumulation for motor control contribute to the ecological validity of driver models. According to established theories of cognition, drivers make control adjustments when a process of accumulation of perceptual inputs reaches a decision boundary. Unfortunately, there is no standard way to build such models, which limits their use. Current implementations are hand-crafted, lack adaptability, and rely on inefficient optimization techniques that do not scale well to large datasets. This paper introduces Akkumula, an evidence accumulation modelling framework built with deep learning techniques to leverage established coding libraries, gradient optimization, and large-batch training. The core of the library is based on Spiking Neural Networks, whose operation mimics the evidence accumulation process in the biological brain. The model was tested on data collected during a test-track experiment. Results are promising: the model fits the time course of vehicle control (braking, accelerating, steering) well, based on vehicle sensor data. The perceptual inputs are extracted by a dedicated neural network, increasing the context-awareness of the model in dynamic scenarios. Akkumula integrates with existing machine learning architectures, benefits from continuous advancements in deep learning, efficiently processes large datasets, adapts to diverse driving scenarios, and maintains a degree of transparency in its core mechanisms.
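As a rough illustration of the accumulate-to-bound mechanism described in the abstract, the sketch below implements a leaky evidence accumulator with a hard decision boundary in PyTorch. This is a minimal toy under stated assumptions, not the Akkumula library or its API; the class name, the leak and boundary parameters, and the reset rule are all illustrative.

```python
# Minimal sketch of accumulate-to-bound control (NOT the Akkumula API).
# A leaky accumulator integrates weighted perceptual inputs over time;
# when the accumulated evidence reaches a decision boundary, a control
# adjustment is emitted and the accumulator resets.
import torch
import torch.nn as nn


class LeakyEvidenceAccumulator(nn.Module):
    """Illustrative leaky-integrate-and-fire style accumulator."""

    def __init__(self, n_inputs: int, leak: float = 0.9, boundary: float = 1.0):
        super().__init__()
        self.readout = nn.Linear(n_inputs, 1, bias=False)  # weights perceptual cues
        self.leak = leak            # fraction of evidence retained per time step
        self.boundary = boundary    # decision threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_inputs), e.g. looming, lane offset, headway
        batch, steps, _ = x.shape
        evidence = x.new_zeros(batch, 1)
        adjustments = []
        for t in range(steps):
            evidence = self.leak * evidence + self.readout(x[:, t, :])
            fired = (evidence >= self.boundary).float()  # 1.0 = control adjustment
            evidence = evidence * (1.0 - fired)          # reset after a decision
            adjustments.append(fired)
        # Note: the hard threshold is non-differentiable; spiking-network
        # training in practice replaces it with a surrogate gradient.
        return torch.stack(adjustments, dim=1)  # (batch, time, 1) spike train


if __name__ == "__main__":
    model = LeakyEvidenceAccumulator(n_inputs=3)
    sensor_data = torch.randn(8, 100, 3)   # 8 trials, 100 time steps, 3 cues
    spikes = model(sensor_data)
    print(spikes.sum(dim=1).squeeze())     # control adjustments per trial
```

In the paper, the perceptual inputs themselves are extracted by a dedicated neural network, so in an end-to-end pipeline a front-end encoder over the vehicle sensor streams would feed an accumulator of this kind.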
Related papers
- Action Flow Matching for Continual Robot Learning [57.698553219660376]
Continual learning in robotics seeks systems that can constantly adapt to changing environments and tasks. We introduce a generative framework leveraging flow matching for online robot dynamics model alignment. We find that by transforming the actions themselves rather than exploring with a misaligned model, the robot collects informative data more efficiently.
arXiv Detail & Related papers (2025-04-25T16:26:15Z) - Continual Learning for Adaptable Car-Following in Dynamic Traffic Environments [16.587883982785]
The continual evolution of autonomous driving technology requires car-following models that can adapt to diverse and dynamic traffic environments.
Traditional learning-based models often suffer from performance degradation when encountering unseen traffic patterns due to a lack of continual learning capabilities.
This paper proposes a novel car-following model based on continual learning that addresses this limitation.
arXiv Detail & Related papers (2024-07-17T06:32:52Z) - Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z) - Quantized Distillation: Optimizing Driver Activity Recognition Models
for Resource-Constrained Environments [34.80538284957094]
This paper introduces a lightweight framework for resource-efficient driver activity recognition.
The framework enhances 3D MobileNet, a neural architecture optimized for speed in video classification.
It achieves a threefold reduction in model size and a 1.4-fold improvement in inference time.
arXiv Detail & Related papers (2023-11-10T10:07:07Z) - Learning An Active Inference Model of Driver Perception and Control: Application to Vehicle Car-Following [9.837204436270811]
We introduce a general estimation methodology for learning a model of human perception and control in a sensorimotor control task. We consider a model's structure specification consistent with active inference, a theory of human perception and behavior from cognitive science.
arXiv Detail & Related papers (2023-03-27T13:39:26Z) - Continual learning autoencoder training for a particle-in-cell
simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with high resolution.
This high resolution will impact the training of machine learning models, since storing such a large amount of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently to a running simulation without data on a disk.
arXiv Detail & Related papers (2022-11-09T09:55:14Z) - Data Models for Dataset Drift Controls in Machine Learning With Optical
Images [8.818468649062932]
A primary failure mode is performance drops due to differences between the training and deployment data.
Existing approaches do not account for explicit models of the primary object of interest: the data.
We demonstrate how such data models can be constructed for image data and used to control downstream machine learning model performance related to dataset drift.
arXiv Detail & Related papers (2022-11-04T16:50:10Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z) - SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection
for Autonomous Driving [0.0]
Spiking Neural Networks are a new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency.
We first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving.
arXiv Detail & Related papers (2022-06-06T20:05:17Z) - Towards Optimal Strategies for Training Self-Driving Perception Models
in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z) - STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model can achieve comparable performance while utilizing much less trainable parameters and achieve high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z) - Iterative Semi-parametric Dynamics Model Learning For Autonomous Racing [2.40966076588569]
We develop and apply an iterative learning semi-parametric model, with a neural network, to the task of autonomous racing.
We show that our model can learn more accurately than a purely parametric model and generalize better than a purely non-parametric model.
arXiv Detail & Related papers (2020-11-17T16:24:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.