Handling Missing Observations with an RNN-based Prediction-Update Cycle
- URL: http://arxiv.org/abs/2103.11747v1
- Date: Mon, 22 Mar 2021 11:55:10 GMT
- Title: Handling Missing Observations with an RNN-based Prediction-Update Cycle
- Authors: Stefan Becker, Ronny Hug, Wolfgang Hübner, Michael Arens, and
  Brendan T. Morris
- Abstract summary: In tasks such as tracking, time-series data inevitably carry missing observations.
This paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation.
- Score: 10.478312054103975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In tasks such as tracking, time-series data inevitably carry missing
observations. While traditional tracking approaches can handle missing
observations, recurrent neural networks (RNNs) are designed to receive input
data in every step. Furthermore, current solutions for RNNs, like omitting the
missing data or data imputation, are not sufficient to account for the
resulting increased uncertainty. To this end, this paper introduces an
RNN-based approach that provides a full temporal filtering cycle for motion
state estimation. The Kalman filter-inspired approach can deal with missing
observations and outliers. To provide a full temporal filtering cycle, a basic
RNN is extended to take observations and the associated belief about their
accuracy into account when updating the current state. An RNN prediction
model, which generates a parametrized distribution to capture the predicted
states, is combined with an RNN update model, which relies on the prediction
model output and the current observation. By providing the model with masking
information, i.e., binary-encoded missing events, the model can overcome the
limitations of standard techniques for dealing with missing input values. The
model's abilities are demonstrated on synthetic data reflecting prototypical
pedestrian tracking scenarios.
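A minimal sketch of such a prediction-update cycle, assuming a Gaussian parametrization of the predicted state and GRU cells for both models; the class names, layer sizes, and mask handling are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PredictionRNN(nn.Module):
    """Generates a parametrized (here: Gaussian) distribution over the predicted state."""
    def __init__(self, state_dim=2, hidden_dim=32):
        super().__init__()
        self.cell = nn.GRUCell(state_dim, hidden_dim)
        self.mean = nn.Linear(hidden_dim, state_dim)
        self.log_var = nn.Linear(hidden_dim, state_dim)

    def forward(self, state, h):
        h = self.cell(state, h)
        return self.mean(h), self.log_var(h), h

class UpdateRNN(nn.Module):
    """Fuses the predicted distribution with the current observation.

    A binary mask (1 = observed, 0 = missing) is part of the input, so the
    model can learn to fall back on the prediction when data are missing.
    """
    def __init__(self, state_dim=2, hidden_dim=32):
        super().__init__()
        self.cell = nn.GRUCell(3 * state_dim + 1, hidden_dim)
        self.out = nn.Linear(hidden_dim, state_dim)

    def forward(self, mean, log_var, obs, mask, h):
        obs = torch.where(mask.bool(), obs, mean)  # replace missing obs by the prediction
        h = self.cell(torch.cat([mean, log_var, obs, mask], dim=-1), h)
        return self.out(h), h

# One filtering step per time index: predict a distribution, then update it
# with the (possibly missing) observation.
pred, upd = PredictionRNN(), UpdateRNN()
state = torch.zeros(1, 2)
h_p, h_u = torch.zeros(1, 32), torch.zeros(1, 32)
steps = [(torch.randn(1, 2), torch.ones(1, 1)),   # observation available
         (torch.zeros(1, 2), torch.zeros(1, 1))]  # observation missing
for obs, mask in steps:
    mean, log_var, h_p = pred(state, h_p)
    state, h_u = upd(mean, log_var, obs, mask, h_u)
```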
Related papers
- Neural Differential Recurrent Neural Network with Adaptive Time Steps [11.999568208578799]
We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states.
We adaptively select time steps based on the steepness of the changes in the data over time, so as to train the model more efficiently for "spike-like" time series.
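A minimal sketch of steepness-based step selection, using a simple slope threshold as a stand-in for the paper's adaptive criterion; the function name, threshold, and coarsening rule are assumptions for illustration.

```python
import numpy as np

def adaptive_time_steps(t, x, threshold=0.5):
    """Keep every sample whose local slope exceeds `threshold`; elsewhere keep
    only every second sample, so flat regions get coarser time steps."""
    slopes = np.abs(np.diff(x) / np.diff(t))
    keep = [0]
    for i in range(1, len(t) - 1):
        if slopes[i - 1] > threshold or i - keep[-1] >= 2:
            keep.append(i)
    keep.append(len(t) - 1)
    return np.array(keep)

t = np.linspace(0.0, 10.0, 200)
x = np.where(np.abs(t - 5.0) < 0.5, 5.0, 0.0)  # a "spike-like" series
idx = adaptive_time_steps(t, x)
print(f"kept {len(idx)} of {len(t)} time steps")
```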
arXiv Detail & Related papers (2023-06-02T16:46:47Z)
- Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM) with data imputation.
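As a rough, matrix-case illustration of this kind of solver, the sketch below runs nuclear-norm singular-value thresholding inside an iterative imputation loop, a simplified stand-in for the paper's truncated tensor Schatten p-norm and its ADMM derivation; all names and parameters are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def impute_low_rank(X, mask, tau=1.0, n_iter=100):
    """X: data matrix (missing entries may hold any value).
    mask: boolean array, True where X is observed."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        L = svt(Z, tau)           # low-rank estimate of the full matrix
        Z = np.where(mask, X, L)  # keep observed entries, fill missing ones
    return Z
```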
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Integrating Recurrent Neural Networks with Data Assimilation for Scalable Data-Driven State Estimation [0.0]
Data assimilation (DA) is integrated with machine learning to perform entirely data-driven online state estimation.
Recurrent neural networks (RNNs) are implemented as surrogate models to replace key components of the DA cycle in numerical weather prediction (NWP).
It is shown how these RNNs can be trained using DA methods to directly update the hidden/reservoir state with observations of the target system.
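A minimal sketch of this idea, using a fixed-gain (nudging-style) analysis step as a simplified stand-in for the ensemble DA methods in the paper; the reservoir dynamics, observation operator, and gain are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_obs = 50, 3

W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent/reservoir weights
H = rng.normal(size=(n_obs, n_hidden))                # observation operator
K = 0.1 * H.T                                         # illustrative fixed gain

h = np.zeros(n_hidden)
for t in range(100):
    h = np.tanh(W @ h)              # forecast step: free-running RNN dynamics
    y_obs = rng.normal(size=n_obs)  # stand-in for observations of the target system
    innovation = y_obs - H @ h      # observation-minus-forecast residual
    h = h + K @ innovation          # analysis step: update the hidden state
```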
arXiv Detail & Related papers (2021-09-25T03:56:53Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not properly accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
- On Error Correction Neural Networks for Economic Forecasting [0.0]
A class of RNNs called Error Correction Neural Networks (ECNNs) was designed to compensate for missing input variables.
It does this by feeding the error made in the previous step back as an input to the current step.
The ECNN is implemented in Python by computing the appropriate gradients, and it is tested on stock market predictions.
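A minimal sketch of the error-feedback recurrence, assuming a GRU cell and a linear readout as stand-ins for the paper's architecture; during training the error is computed against the known targets (teacher forcing).

```python
import torch
import torch.nn as nn

class ECNNCell(nn.Module):
    """Feeds the previous step's prediction error back as an extra input."""
    def __init__(self, input_dim=1, hidden_dim=16, output_dim=1):
        super().__init__()
        self.cell = nn.GRUCell(input_dim + output_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, output_dim)

    def forward(self, x_seq, y_seq):
        """x_seq: (T, batch, input_dim); y_seq: (T, batch, output_dim)."""
        h = x_seq.new_zeros(x_seq.shape[1], self.cell.hidden_size)
        err = x_seq.new_zeros(x_seq.shape[1], self.readout.out_features)
        preds = []
        for t in range(x_seq.shape[0]):
            h = self.cell(torch.cat([x_seq[t], err], dim=-1), h)
            y_hat = self.readout(h)
            err = y_seq[t] - y_hat  # error fed back at the next step
            preds.append(y_hat)
        return torch.stack(preds)  # (T, batch, output_dim)

model = ECNNCell()
x = torch.randn(20, 4, 1)  # e.g. 20 time steps of a market indicator
y = torch.randn(20, 4, 1)  # targets, e.g. next-day returns
pred = model(x, y)
```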
arXiv Detail & Related papers (2020-04-11T01:23:39Z)