Asymmetrical Bi-RNN for pedestrian trajectory encoding
- URL: http://arxiv.org/abs/2106.04419v1
- Date: Tue, 1 Jun 2021 12:05:15 GMT
- Title: Asymmetrical Bi-RNN for pedestrian trajectory encoding
- Authors: Raphaël Rozenberg, Joseph Gesnouin and Fabien Moutarde
- Abstract summary: We present a non-symmetrical bidirectional recurrent neural network architecture called U-RNN as a sequence encoder.
Experimental results on the Trajnet++ benchmark show that the U-LSTM variant can yield better results regarding every available metric.
- Score: 2.1485350418225244
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pedestrian motion behavior involves a combination of individual goals and
social interactions with other agents. In this article, we present a
non-symmetrical bidirectional recurrent neural network architecture called
U-RNN as a sequence encoder and evaluate its relevance to replace LSTMs for
various forecasting models. Experimental results on the Trajnet++ benchmark
show that the U-LSTM variant can yield better results regarding every available
metric (ADE, FDE, Collision rate) than common LSTM sequence encoders for a
variety of approaches and interaction modules.
Our implementation of the asymmetrical Bi-RNNs for the Trajnet++ benchmark is
available at:
github.com/JosephGesnouin/Asymmetrical-Bi-RNNs-to-encode-pedestrian-trajectories
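The abstract describes the encoder only at a high level. Below is a minimal NumPy sketch of the asymmetric idea as it can be read from the abstract: a backward recurrent pass over the trajectory first, then a forward pass whose input at each step also consumes the corresponding backward hidden state. All weights, dimensions, and function names here are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def rnn_step(W_x, W_h, b, x, h):
    # Single vanilla-RNN cell update with tanh activation.
    return np.tanh(W_x @ x + W_h @ h + b)

def u_rnn_encode(traj, d_in=2, d_hid=8, seed=0):
    """Encode a trajectory (T x d_in array of positions) asymmetrically:
    a backward pass first, then a forward pass that takes the backward
    hidden states as extra input. Weights are random for illustration;
    in practice they are learned."""
    rng = np.random.default_rng(seed)
    # Backward cell: input is the raw observation.
    Wxb = rng.standard_normal((d_hid, d_in)) * 0.1
    Whb = rng.standard_normal((d_hid, d_hid)) * 0.1
    bb = np.zeros(d_hid)
    # Forward cell: input is [observation ; backward hidden state].
    Wxf = rng.standard_normal((d_hid, d_in + d_hid)) * 0.1
    Whf = rng.standard_normal((d_hid, d_hid)) * 0.1
    bf = np.zeros(d_hid)

    T = len(traj)
    h_back = np.zeros((T, d_hid))
    h = np.zeros(d_hid)
    for t in reversed(range(T)):      # backward pass
        h = rnn_step(Wxb, Whb, bb, traj[t], h)
        h_back[t] = h
    h = np.zeros(d_hid)
    for t in range(T):                # forward pass reuses h_back
        h = rnn_step(Wxf, Whf, bf, np.concatenate([traj[t], h_back[t]]), h)
    return h                          # final state = trajectory encoding

traj = np.cumsum(np.full((9, 2), 0.4), axis=0)  # straight-line walk
code = u_rnn_encode(traj)
print(code.shape)  # (8,)
```

Unlike a symmetric Bi-RNN, which concatenates two independent passes, the forward pass here is conditioned on the backward one, which is the asymmetry the title refers to.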
Related papers
- GNN-Suite: a Graph Neural Network Benchmarking Framework for Biomedical Informatics [0.0]
We present GNN-Suite, a framework for constructing and benchmarking Graph Neural Network (GNN) architectures in computational biology.
We demonstrate its utility in identifying cancer-driver genes by constructing molecular networks from protein-protein interaction (PPI) data.
Our results show that a common framework for implementing and evaluating GNN architectures aids in identifying not only the best model but also the most effective means of incorporating complementary data.
arXiv Detail & Related papers (2025-05-15T21:14:30Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Simple GNNs with Low Rank Non-parametric Aggregators [12.108529628556944]
State-of-the-art (SOTA) GNN architectures may be over-engineered for common SSNC benchmark datasets.
By replacing feature aggregation with a non-parametric learner we are able to streamline the GNN design process.
arXiv Detail & Related papers (2023-10-08T17:56:30Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architecture.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- VQ-T: RNN Transducers using Vector-Quantized Prediction Network States [52.48566999668521]
We propose to use vector-quantized long short-term memory units in the prediction network of RNN transducers.
By training the discrete representation jointly with the ASR network, hypotheses can be actively merged for lattice generation.
Our experiments on the Switchboard corpus show that the proposed VQ RNN transducers improve ASR performance over transducers with regular prediction networks.
arXiv Detail & Related papers (2022-08-03T02:45:52Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Lattice gauge symmetry in neural networks [0.0]
We review a novel neural network architecture called lattice gauge equivariant convolutional neural networks (L-CNNs).
We discuss the concept of gauge equivariance which we use to explicitly construct a gauge equivariant convolutional layer and a bilinear layer.
The performance of L-CNNs and non-equivariant CNNs is compared using seemingly simple non-linear regression tasks.
arXiv Detail & Related papers (2021-11-08T11:20:11Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectories information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving a satisfying performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)
- Automatic Remaining Useful Life Estimation Framework with Embedded Convolutional LSTM as the Backbone [5.927250637620123]
We propose a new LSTM variant called embedded convolutional LSTM (ECLSTM).
In ECLSTM, a group of different 1D convolutions is embedded into the LSTM structure. Through this, the temporal information is preserved between and within windows.
We show the superiority of our proposed ETM approach over the state-of-the-art approaches on several widely used benchmark data sets for RUL Estimation.
arXiv Detail & Related papers (2020-08-10T08:34:20Z)
- Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Forecasting Network-wide Traffic State with Missing Values [23.504633202965376]
We focus on RNN-based models and attempt to reformulate the way to incorporate RNN and its variants into traffic prediction models.
A stacked bidirectional and unidirectional LSTM network architecture (SBU-LSTM) is proposed to assist the design of neural network structures for traffic state forecasting.
We also propose a data imputation mechanism in the LSTM structure (LSTM-I) by designing an imputation unit to infer missing values and assist traffic prediction.
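The summary above only names an imputation unit that infers missing values inside the recurrence. As a hedged illustration of the general idea (not the paper's actual LSTM-I design), one common pattern is to predict the missing observation from the current hidden state before the recurrent update:

```python
import numpy as np

def impute_step(W_out, h, x, missing):
    # When the observation is missing (NaN), fall back to a value
    # predicted from the current hidden state; otherwise keep x.
    x_hat = W_out @ h
    return x_hat if missing else x

def rnn_with_imputation(xs, d_hid=4, seed=1):
    """Toy recurrent pass over a 1-D series with NaNs marking gaps.
    A hypothetical imputation unit fills each gap from the hidden state
    before the update (vanilla RNN with random weights, sketch only)."""
    rng = np.random.default_rng(seed)
    Wx = rng.standard_normal((d_hid, 1)) * 0.1
    Wh = rng.standard_normal((d_hid, d_hid)) * 0.1
    W_out = rng.standard_normal((1, d_hid)) * 0.1
    h = np.zeros(d_hid)
    filled = []
    for x in xs:
        xv = impute_step(W_out, h, np.array([x]), np.isnan(x))
        filled.append(float(xv[0]))
        h = np.tanh(Wx @ xv + Wh @ h)
    return filled

series = [0.5, 0.6, float("nan"), 0.8]
out = rnn_with_imputation(series)
print(len(out))  # 4, with the NaN replaced by a model estimate
```

Because the imputed value is produced by the same network that consumes it, imputation and prediction can be trained jointly rather than as a separate preprocessing step.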
arXiv Detail & Related papers (2020-05-24T00:17:15Z)
- Locality Sensitive Hashing-based Sequence Alignment Using Deep Bidirectional LSTM Models [0.0]
Bidirectional Long Short-Term Memory (LSTM) is a special kind of Recurrent Neural Network (RNN) architecture.
This paper proposes to use deep bidirectional LSTM for sequence modeling as an approach to perform locality-sensitive hashing (LSH)-based sequence alignment.
arXiv Detail & Related papers (2020-04-05T05:13:06Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.