Generalized adaptive smoothing based neural network architecture for
traffic state estimation
- URL: http://arxiv.org/abs/2301.03439v1
- Date: Mon, 9 Jan 2023 15:40:45 GMT
- Title: Generalized adaptive smoothing based neural network architecture for
traffic state estimation
- Authors: Chuhan Yang, Sai Venkata Ramana Ambadipudi and Saif Eddin Jabari
- Abstract summary: The adaptive smoothing method (ASM) is a standard data-driven technique used in traffic state estimation.
We propose a neural network based on the ASM which tunes those parameters automatically by learning from sparse data from road sensors.
Our experiments reveal that the ASNN and the MASNN outperform the conventional ASM.
- Score: 1.0312968200748118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adaptive smoothing method (ASM) is a standard data-driven technique used
in traffic state estimation. The ASM has free parameters which, in practice,
are chosen to be some generally acceptable values based on intuition. However,
we note that the heuristically chosen values often result in unphysical
predictions by the ASM. In this work, we propose a neural network based on the
ASM which tunes those parameters automatically by learning from sparse data
from road sensors. We refer to it as the adaptive smoothing neural network
(ASNN). We also propose a modified ASNN (MASNN), which makes it a strong
learner by using ensemble averaging. The ASNN and MASNN are trained and tested
on two real-world datasets. Our experiments reveal that the ASNN and the MASNN
outperform the conventional ASM.
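The free parameters the abstract refers to (kernel widths, characteristic wave speeds, and the crossover speed between regimes) can be made concrete with a minimal NumPy sketch of the classical adaptive smoothing method. The function name, default parameter values, and unit conventions below are illustrative assumptions, not the paper's learned settings:

```python
import numpy as np

def asm_speed_field(x_det, t_det, v_det, x_grid, t_grid,
                    c_free=70.0, c_cong=-15.0, sigma=0.6, tau=1.1,
                    v_crit=60.0, delta_v=20.0):
    # Estimate speeds on a space-time grid from sparse detector
    # readings (x_det, t_det, v_det) by smoothing along two
    # characteristic directions and blending the results adaptively.
    X, T = np.meshgrid(np.asarray(x_grid, float),
                       np.asarray(t_grid, float), indexing="ij")

    def smooth(c):
        # Anisotropic exponential kernel shifted along wave speed c.
        num = np.zeros_like(X)
        den = np.zeros_like(X)
        for xi, ti, vi in zip(x_det, t_det, v_det):
            dx = X - xi
            dt = T - ti - dx / c
            phi = np.exp(-np.abs(dx) / sigma - np.abs(dt) / tau)
            num += phi * vi
            den += phi
        return num / np.maximum(den, 1e-12)

    v_free = smooth(c_free)   # smoothing along free-flow waves
    v_cong = smooth(c_cong)   # smoothing along congested (backward) waves
    # Adaptive weight: favor the congested field where speeds are low.
    w = 0.5 * (1.0 + np.tanh((v_crit - np.minimum(v_free, v_cong)) / delta_v))
    return w * v_cong + (1.0 - w) * v_free
```

The ASNN described above replaces the hand-picked values of `sigma`, `tau`, `c_free`, `c_cong`, `v_crit`, and `delta_v` with values learned from the sparse sensor data itself.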
Related papers
- ALWNN Empowered Automatic Modulation Classification: Conquering Complexity and Scarce Sample Conditions [24.59462798452397]
This paper proposes an automatic modulation classification model based on the Adaptive Lightweight Wavelet Neural Network (ALWNN) and the few-shot framework (MALWNN).
The ALWNN model, by integrating the adaptive wavelet neural network and depthwise separable convolution, reduces the number of model parameters and computational complexity.
Experiments with MALWNN show its superior performance in few-shot learning scenarios compared to other algorithms.
arXiv Detail & Related papers (2025-03-24T06:14:33Z)
- Accurate Mapping of RNNs on Neuromorphic Hardware with Adaptive Spiking Neurons [2.9410174624086025]
We present a $\Sigma\Delta$-low-pass RNN (lpRNN) for mapping rate-based RNNs to spiking neural networks (SNNs).
An adaptive spiking neuron model encodes signals using $\Sigma\Delta$-modulation and enables precise mapping.
We demonstrate the implementation of the lpRNN on Intel's neuromorphic research chip Loihi.
arXiv Detail & Related papers (2024-07-18T14:06:07Z)
- A hybrid IndRNNLSTM approach for real-time anomaly detection in software-defined networks [0.0]
Anomaly detection in SDN using data flow prediction is a difficult task.
The IndRNNLSTM algorithm, in combination with Embedded, achieved MAE=1.22 and RMSE=9.92 on the NSL-KDD data.
arXiv Detail & Related papers (2024-02-02T20:41:55Z)
- Text Classification in Memristor-based Spiking Neural Networks [0.0]
We develop a simulation framework with a virtual memristor array to demonstrate a sentiment analysis task on the IMDB movie reviews dataset.
We achieve a classification accuracy of 85.88% by converting a pre-trained ANN to a memristor-based SNN and 84.86% by training the memristor-based SNN directly.
We also investigate how global parameters such as spike train length, the read noise, and the weight updating stop conditions affect the neural networks in both approaches.
arXiv Detail & Related papers (2022-07-27T18:08:31Z)
- Deep neural network based adaptive learning for switched systems [0.3222802562733786]
We present a deep neural network based adaptive learning (DNN-AL) approach for switched systems.
Observed datasets are adaptively decomposed into subsets such that there are no structural changes within each subset.
Network parameters at previous iteration steps are reused to initialize networks for the later iteration steps.
arXiv Detail & Related papers (2022-07-11T04:51:58Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Supervised Training of Siamese Spiking Neural Networks with Earth's Mover Distance [4.047840018793636]
This study adapts the highly-versatile siamese neural network model to the event data domain.
We introduce a supervised training framework for optimizing Earth's Mover Distance between spike trains with spiking neural networks (SNNs).
arXiv Detail & Related papers (2022-02-20T00:27:57Z)
- Can we learn gradients by Hamiltonian Neural Networks? [68.8204255655161]
We propose a meta-learner based on ODE neural networks that learns gradients.
We demonstrate that our method outperforms a meta-learner based on LSTM for an artificial task and the MNIST dataset with ReLU activations in the optimizee.
arXiv Detail & Related papers (2021-10-31T18:35:10Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
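The siamese-SNN entry above trains against Earth's Mover Distance between spike trains. In one dimension, with equal spike counts and uniform mass per spike, this distance has a simple closed form via sorted spike times, sketched below; the function name and the equal-count assumption are ours, not the paper's:

```python
import numpy as np

def spike_train_emd(s1, s2):
    # 1-D Earth Mover's Distance between two spike trains with the
    # same number of spikes and uniform mass per spike: the optimal
    # transport plan is monotone, so the distance reduces to the
    # mean absolute difference of the sorted spike times.
    a = np.sort(np.asarray(s1, dtype=float))
    b = np.sort(np.asarray(s2, dtype=float))
    if a.shape != b.shape:
        raise ValueError("this sketch assumes equal spike counts")
    return float(np.mean(np.abs(a - b)))
```

For unequal spike counts or nonuniform mass, a general 1-D optimal-transport formulation (integrating the absolute difference of the two empirical CDFs) would be needed instead.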
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.