Sequence-to-Sequence Imputation of Missing Sensor Data
- URL: http://arxiv.org/abs/2002.10767v1
- Date: Tue, 25 Feb 2020 09:51:20 GMT
- Title: Sequence-to-Sequence Imputation of Missing Sensor Data
- Authors: Joel Janek Dabrowski and Ashfaqur Rahman
- Abstract summary: We develop a sequence-to-sequence model for recovering missing sensor data.
A forward RNN encodes the data observed before the missing sequence and a backward RNN encodes the data observed after the missing sequence.
A decoder decodes the outputs of the two encoders in a novel way to predict the missing data.
- Score: 1.9036571490366496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although the sequence-to-sequence (encoder-decoder) model is considered the
state-of-the-art in deep learning sequence models, there is little research
into using this model for recovering missing sensor data. The key challenge is
that the missing sensor data problem typically comprises three sequences (a
sequence of observed samples, followed by a sequence of missing samples,
followed by another sequence of observed samples), whereas the
sequence-to-sequence model only considers two sequences (an input sequence and
an output sequence). We address this problem by formulating the
sequence-to-sequence model in a novel way. A forward RNN encodes the data observed
before the missing sequence and a backward RNN encodes the data observed after
the missing sequence. A decoder decodes the outputs of the two encoders in a
novel way to predict the missing data. We demonstrate that this model produces the lowest
errors in 12% more cases than the current state-of-the-art.
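As a rough illustration of the architecture described in the abstract, the sketch below encodes the pre-gap sequence with a forward GRU, the post-gap sequence with a backward GRU, and unrolls a decoder over the gap. It is a minimal sketch under my own assumptions (PyTorch, GRU cells, and a simple concatenation of the two encodings), not the authors' implementation, and all class and variable names are illustrative:

```python
# Minimal sketch of the idea in the abstract, not the authors' implementation:
# a forward GRU encodes the samples observed before the gap, a backward GRU
# encodes the samples observed after the gap, and a decoder conditioned on
# both encodings predicts the missing samples one step at a time.
import torch
import torch.nn as nn


class GapImputer(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.fwd_encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.bwd_encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.decoder = nn.GRUCell(n_features, hidden_size)
        # Simple stand-in for the paper's decoding scheme: concatenate the
        # decoder state with the backward encoding at every step.
        self.output = nn.Linear(2 * hidden_size, n_features)

    def forward(self, before, after, gap_len):
        # before: (batch, T_before, n_features); after: (batch, T_after, n_features)
        _, h_fwd = self.fwd_encoder(before)                       # (1, batch, hidden)
        _, h_bwd = self.bwd_encoder(torch.flip(after, dims=[1]))  # reversed in time
        h = h_fwd.squeeze(0)        # decoder starts from the forward encoding
        context = h_bwd.squeeze(0)  # backward encoding acts as a fixed context
        x = before[:, -1, :]        # last observed sample before the gap
        preds = []
        for _ in range(gap_len):
            h = self.decoder(x, h)
            x = self.output(torch.cat([h, context], dim=-1))
            preds.append(x)
        return torch.stack(preds, dim=1)  # (batch, gap_len, n_features)


# Hypothetical usage: impute a 10-step gap in a batch of univariate sensor streams.
model = GapImputer(n_features=1)
before = torch.randn(8, 50, 1)   # observations before the gap
after = torch.randn(8, 50, 1)    # observations after the gap
missing = model(before, after, gap_len=10)  # (8, 10, 1)
```

In practice the gap length is known from the timestamps of the surrounding observations, so the decoder can simply be unrolled for that many steps.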
Related papers
- Harnessing Attention Mechanisms: Efficient Sequence Reduction using
Attention-based Autoencoders [14.25761027376296]
We introduce a novel attention-based method that allows for the direct manipulation of sequence lengths.
We show that the autoencoder retains all the significant information when reducing the original sequence to half its original size.
arXiv Detail & Related papers (2023-10-23T11:57:44Z)
- SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with
  Backtracking [60.109453252858806]
A maximum-likelihood (MLE) objective does not match a downstream use-case of autoregressively generating high-quality sequences.
We formulate sequence generation as an imitation learning (IL) problem.
This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset.
Our resulting method, SequenceMatch, can be implemented without adversarial training or architectural changes.
arXiv Detail & Related papers (2023-06-08T17:59:58Z)
- Seq-HyGAN: Sequence Classification via Hypergraph Attention Network [0.0]
Sequence classification has a wide range of real-world applications in different domains, such as genome classification in health and anomaly detection in business.
The lack of explicit features makes sequence data difficult for machine learning models to handle.
We propose a novel Hypergraph Attention Network model, namely Seq-HyGAN.
arXiv Detail & Related papers (2023-03-04T11:53:33Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce
  Compositionality [84.94877848357896]
Recent datasets expose the lack of the systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- A Non-monotonic Self-terminating Language Model [62.93465126911921]
In this paper, we focus on the problem of non-terminating sequences resulting from an incomplete decoding algorithm.
We first define an incomplete probable decoding algorithm which includes greedy search, top-$k$ sampling, and nucleus sampling.
We then propose a non-monotonic self-terminating language model, which relaxes the constraint of monotonically increasing termination probability.
arXiv Detail & Related papers (2022-10-03T00:28:44Z)
- Sequence Prediction Under Missing Data: An RNN Approach Without
  Imputation [1.9188864062289432]
This paper presents a novel Recurrent Neural Network (RNN) based solution for sequence prediction under missing data.
It encodes the missingness patterns in the data directly, without imputing data either before or during model building (see the illustrative sketch after this list).
We focus on forecasting in the general context of multi-step prediction in the presence of possible inputs.
arXiv Detail & Related papers (2022-08-18T16:09:12Z)
- Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video
  Restoration [85.3323211054274]
How to properly model the inter-frame relation within the video sequence is an important but unsolved challenge for video restoration (VR).
In this work, we propose an unsupervised flow-aligned sequence-to-sequence model (S2SVR) to address this problem.
S2SVR shows superior performance in multiple VR tasks, including video deblurring, video super-resolution, and compressed video quality enhancement.
arXiv Detail & Related papers (2022-05-20T14:14:48Z)
- Don't Take It Literally: An Edit-Invariant Sequence Loss for Text
  Generation [109.46348908829697]
We propose a novel Edit-Invariant Sequence Loss (EISL), which computes the matching loss of a target n-gram with all n-grams in the generated sequence.
We conduct experiments on three tasks: machine translation with noisy target sequences, unsupervised text style transfer, and non-autoregressive machine translation.
arXiv Detail & Related papers (2021-06-29T03:59:21Z)
- Multi-Scale One-Class Recurrent Neural Networks for Discrete Event
  Sequence Anomaly Detection [63.825781848587376]
We propose OC4Seq, a one-class recurrent neural network for detecting anomalies in discrete event sequences.
Specifically, OC4Seq embeds the discrete event sequences into latent spaces, where anomalies can be easily detected.
arXiv Detail & Related papers (2020-08-31T04:48:22Z)
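For contrast with the imputation approach of the main paper, the sketch below illustrates the kind of "no imputation" strategy mentioned in the Sequence Prediction Under Missing Data entry above: feed the RNN the raw values together with a binary observation mask so the network can learn from the missingness pattern itself. This is a generic masking sketch under my own assumptions (PyTorch, a GRU encoder, a fixed forecast horizon), not the cited paper's exact method:

```python
# Minimal masking sketch (not the cited paper's exact method): instead of
# imputing, feed the RNN the raw values concatenated with a binary mask that
# marks which entries are actually observed.
import torch
import torch.nn as nn


class MaskedGRUForecaster(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64, horizon: int = 5):
        super().__init__()
        # Input is the value vector concatenated with its observation mask.
        self.rnn = nn.GRU(2 * n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon * n_features)
        self.horizon = horizon
        self.n_features = n_features

    def forward(self, values, mask):
        # values: (batch, T, n_features), arbitrary entries at missing positions
        # mask:   (batch, T, n_features), 1 where observed, 0 where missing
        x = torch.cat([values * mask, mask], dim=-1)
        _, h = self.rnn(x)
        out = self.head(h.squeeze(0))
        return out.view(-1, self.horizon, self.n_features)


# Hypothetical usage: multi-step forecast from a partially observed series.
values = torch.randn(4, 30, 1)
mask = (torch.rand(4, 30, 1) > 0.2).float()
forecast = MaskedGRUForecaster(n_features=1)(values, mask)  # (4, 5, 1)
```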
This list is automatically generated from the titles and abstracts of the papers on this site.