Explicit Context Integrated Recurrent Neural Network for Sensor Data
Applications
- URL: http://arxiv.org/abs/2301.05031v1
- Date: Thu, 12 Jan 2023 13:58:56 GMT
- Title: Explicit Context Integrated Recurrent Neural Network for Sensor Data
Applications
- Authors: Rashmi Dutta Baruah and Mario Muñoz Organero
- Abstract summary: Context Integrated RNN (CiRNN) enables integrating explicit contexts represented in the form of contextual features.
Experiments show improvements of 39% and 87% over state-of-the-art models when performance is measured with RMSE and with an asymmetric scoring function, respectively.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development and progress in sensor, communication and computing
technologies have led to data rich environments. In such environments, data can
easily be acquired not only from the monitored entities but also from the
surroundings where the entity is operating. The additional data that are
available from the problem domain, which cannot be used independently for
learning models, constitute context. Such context, if taken into account while
learning, can potentially improve the performance of predictive models.
Typically, the data from various sensors are present in the form of time
series. Recurrent Neural Networks (RNNs) are preferred for such data as they can
inherently handle temporal context. However, the conventional RNN models such
as Elman RNN, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) in
their present form do not provide any mechanism to integrate explicit contexts.
In this paper, we propose a Context Integrated RNN (CiRNN) that enables
integrating explicit contexts represented in the form of contextual features.
In CiRNN, the network weights are influenced by contextual features in such a
way that the primary input features which are more relevant to a given context
are given more importance. To show the efficacy of CiRNN, we selected an
application domain, engine health prognostics, which captures data from various
sensors and where contextual information is available. We used the NASA
Turbofan Engine Degradation Simulation dataset for estimating Remaining Useful
Life (RUL) as it provides contextual information. We compared CiRNN with
baseline models as well as the state-of-the-art methods. The experimental
results show improvements of 39% and 87%, respectively, over state-of-the-art
models when performance is measured with RMSE and with the score from an
asymmetric scoring function. The latter measure is specific to the task of RUL estimation.
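The abstract describes the weighting mechanism only at a high level. As a rough illustration of the general idea, the following minimal sketch lets contextual features produce multiplicative gains on the primary inputs, which is equivalent to scaling the columns of the input weight matrix. This is an assumption for illustration, not the paper's actual CiRNN formulation, and all class and variable names (ContextGatedRNNCell, gain, etc.) are invented here.

```python
import torch
import torch.nn as nn

class ContextGatedRNNCell(nn.Module):
    """Toy sketch of context-conditioned recurrence (NOT the paper's exact
    CiRNN equations): contextual features c_t produce per-feature gains that
    re-weight the primary inputs x_t before an Elman-style update, so inputs
    that are more relevant under the current context get more importance."""

    def __init__(self, input_size: int, context_size: int, hidden_size: int):
        super().__init__()
        self.W_x = nn.Linear(input_size, hidden_size, bias=False)
        self.W_h = nn.Linear(hidden_size, hidden_size)
        # Maps contextual features to a relevance gain per primary input.
        self.gain = nn.Sequential(
            nn.Linear(context_size, input_size),
            nn.Sigmoid(),  # gains in (0, 1): down-weight context-irrelevant inputs
        )

    def forward(self, x_t, c_t, h_prev):
        # Elementwise gain on x_t == scaling the columns of W_x per context.
        x_weighted = x_t * self.gain(c_t)
        return torch.tanh(self.W_x(x_weighted) + self.W_h(h_prev))

# Toy usage: 14 sensor channels and 3 operating-condition (context) features,
# roughly mirroring the turbofan RUL setting described in the abstract.
cell = ContextGatedRNNCell(input_size=14, context_size=3, hidden_size=32)
h = torch.zeros(1, 32)
for _ in range(5):  # unroll over a short window of time steps
    x, c = torch.randn(1, 14), torch.randn(1, 3)
    h = cell(x, c, h)
```

Because the gain multiplies the input elementwise, context never has to enter the recurrence as an extra input feature; it only modulates how strongly each sensor channel drives the hidden state, which is consistent with the abstract's description of context influencing the network weights.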
Related papers
- Enhancing SNN-based Spatio-Temporal Learning: A Benchmark Dataset and Cross-Modality Attention Model [30.66645039322337]
High-quality benchmark datasets are of great importance to the advances of Spiking Neural Networks (SNNs).
Yet, SNN-based cross-modal fusion remains underexplored.
In this work, we present a neuromorphic dataset that can better exploit the inherent spatio-temporal properties of SNNs.
arXiv Detail & Related papers (2024-10-21T06:59:04Z)
- Synthetic Trajectory Generation Through Convolutional Neural Networks [6.717469146587211]
We introduce a Reversible Trajectory-to-CNN Transformation (RTCT)
RTCT adapts trajectories into a format suitable for CNN-based models.
We evaluate its performance against an RNN-based trajectory GAN.
arXiv Detail & Related papers (2024-07-24T02:16:52Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general purpose, yet modular neural architecture called Neural Attentive Circuits (NACs)
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem (a sketch of the batch-ensemble idea follows this list).
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- ARM-Net: Adaptive Relation Modeling Network for Structured Data [29.94433633729326]
We propose ARM-Net, an adaptive relation modeling network tailored for structured data, together with ARMOR, a lightweight framework based on ARM-Net for relational data.
We show that ARM-Net consistently outperforms existing models and provides more interpretable predictions.
arXiv Detail & Related papers (2021-07-05T07:37:24Z)
- Scene Understanding for Autonomous Driving [0.0]
We study the behaviour of different configurations of RetinaNet, Faster R-CNN and Mask R-CNN presented in Detectron2.
We observe a significant improvement in performance after fine-tuning these models on the datasets of interest.
We run inference in unusual situations using out-of-context datasets and present interesting results.
arXiv Detail & Related papers (2021-05-11T09:50:05Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names)
We propose an entity-masked contrastive pre-training framework for relation extraction (RE)
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
arXiv Detail & Related papers (2020-10-05T11:21:59Z)
- Graph Neural Networks for Leveraging Industrial Equipment Structure: An application to Remaining Useful Life Estimation [21.297461316329453]
We propose to capture the structure of a complex equipment in the form of a graph, and use graph neural networks (GNNs) to model multi-sensor time-series data.
We observe that the proposed GNN-based RUL estimation model compares favorably to several strong baselines from the literature, such as those based on RNNs and CNNs.
arXiv Detail & Related papers (2020-06-30T06:38:08Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (a minimal sketch follows this list).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
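The additive structure described in the last entry is concrete enough to sketch. Below is a minimal illustrative implementation of a Neural Additive Model: one small subnetwork per input feature, with the prediction being a bias plus the sum of their scalar outputs. Names and layer sizes are invented for illustration; this is not the authors' released code.

```python
import torch
import torch.nn as nn

class NAMSketch(nn.Module):
    """Minimal Neural Additive Model: each feature gets its own small MLP,
    and the model output is a bias plus the sum of per-feature outputs, so
    each feature's contribution can be read off and plotted directly."""

    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, n_features); each net sees only its own feature column.
        parts = [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)]
        return self.bias + torch.stack(parts, dim=-1).sum(dim=-1)

model = NAMSketch(n_features=5)
y = model(torch.randn(3, 5))  # shape (3, 1)
```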
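Similarly, the batch-ensemble mechanism referenced in the BE-SNN entry above is, in its original form (Wen et al., 2020), a shared weight matrix plus per-member rank-1 factors, so an M-member ensemble costs little more than a single model. The following is a hedged sketch of that trick for a plain linear layer, not the BE-SNN authors' stochastic networks; class and variable names are invented.

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Sketch of the batch-ensemble trick: member m's weight matrix is
    W * outer(r_m, s_m), so y = r_m * W(x * s_m). Only the rank-1 factors
    r_m, s_m are stored per member; the dense W is shared by all members."""

    def __init__(self, in_features: int, out_features: int, n_members: int):
        super().__init__()
        self.shared = nn.Linear(in_features, out_features, bias=False)
        # Rank-1 factors start near 1.0 so members begin close to the shared model.
        self.r = nn.Parameter(1.0 + 0.1 * torch.randn(n_members, out_features))
        self.s = nn.Parameter(1.0 + 0.1 * torch.randn(n_members, in_features))

    def forward(self, x, member: int):
        # x: (batch, in_features); member selects one set of rank-1 factors.
        return self.shared(x * self.s[member]) * self.r[member]

layer = BatchEnsembleLinear(in_features=8, out_features=4, n_members=4)
x = torch.randn(2, 8)
outs = torch.stack([layer(x, m) for m in range(4)])  # (4, 2, 4): per-member outputs
```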
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.