Indoor environment data time-series reconstruction using autoencoder
neural networks
- URL: http://arxiv.org/abs/2009.08155v2
- Date: Thu, 21 Jan 2021 09:36:37 GMT
- Title: Indoor environment data time-series reconstruction using autoencoder
neural networks
- Authors: Antonio Liguori, Romana Markovic, Thi Thu Ha Dam, Jérôme Frisch,
Christoph van Treeck, Francesco Causone
- Abstract summary: Building data sets are often characterized by errors and missing values.
Three different autoencoder neural networks are trained to reconstruct missing short-term indoor environment data time-series.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the number of installed meters in buildings increases, there is a growing
number of data time-series that could be used to develop data-driven models to
support and optimize building operation. However, building data sets are often
characterized by errors and missing values, which recent research considers
among the main factors limiting the performance of the proposed models.
Motivated by the need to address the problem of missing data in building
operation, this work presents a data-driven approach to fill these gaps. In
this study, three different autoencoder neural networks are trained to
reconstruct missing short-term indoor environment data time-series in a data
set collected in an office building in Aachen, Germany. The data set stems
from a four-year monitoring campaign, between 2014 and 2017, covering 84
different rooms. The models are applicable to different time-series obtained
from room automation, such as indoor air temperature, relative humidity and
$CO_{2}$ data streams. The results show that the proposed methods outperform
classic numerical approaches, reconstructing the corresponding variables with
average RMSEs of 0.42 °C, 1.30 % and 78.41 ppm, respectively.
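The gap-filling setup described in the abstract can be sketched as a small denoising autoencoder: windows of sensor readings with masked (missing) entries are encoded and decoded, and the network is trained to reproduce the complete window. The synthetic data, network size, and training details below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "indoor temperature" windows: a daily sinusoid plus noise
# (assumption: stand-in for real half-hourly room-automation data)
t = np.arange(0, 24, 0.5)                     # 48 half-hourly samples per day
windows = np.stack([
    21 + 2 * np.sin(2 * np.pi * t / 24 + p) + 0.1 * rng.standard_normal(t.size)
    for p in rng.uniform(0, 2 * np.pi, size=256)
])                                            # shape (256, 48)

# Normalize, then corrupt the inputs with short gaps (zeros after scaling)
mu, sd = windows.mean(), windows.std()
x = (windows - mu) / sd
mask = rng.random(x.shape) < 0.2              # ~20% of values missing
x_in = np.where(mask, 0.0, x)                 # gaps presented as zeros

# One-hidden-layer autoencoder trained to reconstruct the full window
d, h = x.shape[1], 16
W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, d)); b2 = np.zeros(d)
lr, losses = 0.5, []
for _ in range(500):
    z = np.tanh(x_in @ W1 + b1)               # encoder
    out = z @ W2 + b2                         # decoder
    err = out - x                             # target is the complete window
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error loss
    g_out = 2 * err / err.size
    gW2, gb2 = z.T @ g_out, g_out.sum(0)
    g_z = (g_out @ W2.T) * (1 - z ** 2)
    gW1, gb1 = x_in.T @ g_z, g_z.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# RMSE on the masked (missing) entries only, back in °C units
recon = (np.tanh(x_in @ W1 + b1) @ W2 + b2) * sd + mu
rmse = float(np.sqrt(((recon - windows)[mask] ** 2).mean()))
```

The masked-entry RMSE computed this way can then be compared against simple baselines such as mean imputation or interpolation, mirroring the paper's comparison with classic numerical approaches.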
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Pushing the Limits of Pre-training for Time Series Forecasting in the
CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show it is a strong zero-shot baseline and benefits from further scaling, both in model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z) - Mitigating Cold-start Forecasting using Cold Causal Demand Forecasting
Model [10.132124789018262]
We introduce the Cold Causal Demand Forecasting (CDF-cold) framework that integrates causal inference with deep learning-based models.
Our experiments demonstrate that the CDF-cold framework outperforms state-of-the-art forecasting models in predicting future values of multivariate time series data.
arXiv Detail & Related papers (2023-06-15T16:36:34Z) - ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z) - Grouped self-attention mechanism for a memory-efficient Transformer [64.0125322353281]
Real-world tasks such as forecasting weather, electricity consumption, and stock prices involve predicting data that vary over time.
Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time.
We propose two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA)
Our proposed model exhibits reduced computational complexity and performance comparable to or better than existing methods.
arXiv Detail & Related papers (2022-10-02T06:58:49Z) - A data filling methodology for time series based on CNN and (Bi)LSTM
neural networks [0.0]
We develop two Deep Learning models aimed at filling data gaps in time series obtained from monitored apartments in Bolzano, Italy.
Our approach manages to capture the fluctuating nature of the data and shows good accuracy in reconstructing the target time series.
arXiv Detail & Related papers (2022-04-21T09:40:30Z) - Deep Generative model with Hierarchical Latent Factors for Time Series
Anomaly Detection [40.21502451136054]
This work presents DGHL, a new family of generative models for time series anomaly detection.
A top-down Convolution Network maps a novel hierarchical latent space to time series windows, exploiting temporal dynamics to encode information efficiently.
Our method outperformed current state-of-the-art models on four popular benchmark datasets.
arXiv Detail & Related papers (2022-02-15T17:19:44Z) - Towards physically consistent data-driven weather forecasting:
Integrating data assimilation with equivariance-preserving deep spatial
transformers [2.7998963147546148]
We propose three components to integrate with commonly used data-driven weather prediction models.
These components are 1) a deep spatial transformer added to latent space of U-NETs to preserve equivariance, 2) a data-assimilation algorithm to ingest noisy observations and improve the initial conditions for next forecasts, and 3) a multi-time-step algorithm, improving the accuracy of forecasts at short intervals.
arXiv Detail & Related papers (2021-03-16T23:15:00Z) - Radflow: A Recurrent, Aggregated, and Decomposable Model for Networks of
Time Series [77.47313102926017]
Radflow is a novel model for networks of time series that influence each other.
It embodies three key ideas: a recurrent neural network to obtain node embeddings that depend on time, the aggregation of the flow of influence from neighboring nodes with multi-head attention, and the multi-layer decomposition of time series.
We show that Radflow can learn different trends and seasonal patterns, that it is robust to missing nodes and edges, and that correlated temporal patterns among network neighbors reflect influence strength.
arXiv Detail & Related papers (2021-02-15T00:57:28Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Progressive Growing of Neural ODEs [7.558546277131641]
We propose a progressive learning paradigm of NODEs for long-term time series forecasting.
Specifically, following the principle of curriculum learning, we gradually increase the complexity of data and network capacity as training progresses.
Our experiments with both synthetic data and real traffic data (PeMS Bay Area traffic data) show that our training methodology consistently improves the performance of vanilla NODEs by over 64%.
arXiv Detail & Related papers (2020-03-08T01:15:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.