Stacked Boosters Network Architecture for Short Term Load Forecasting in
Buildings
- URL: http://arxiv.org/abs/2001.08406v2
- Date: Thu, 16 Apr 2020 05:20:40 GMT
- Title: Stacked Boosters Network Architecture for Short Term Load Forecasting in
Buildings
- Authors: Tuukka Salmi, Jussi Kiljander and Daniel Pakkala
- Abstract summary: This paper presents a novel deep learning architecture for short term load forecasting of building energy loads.
The architecture is based on a simple base learner and multiple boosting systems that are modelled as a single deep neural network.
The architecture is evaluated in several short-term load forecasting tasks with energy data from an office building in Finland.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel deep learning architecture for short term load
forecasting of building energy loads. The architecture is based on a simple
base learner and multiple boosting systems that are modelled as a single deep
neural network. The architecture transforms the original multivariate time
series into multiple cascading univariate time series. Together with sparse
interactions, parameter sharing and equivariant representations, this approach
makes it possible to combat overfitting while still achieving good
representational power with a deep network architecture. The architecture is
evaluated in several short-term load forecasting tasks with energy data from an
office building in Finland. The proposed architecture outperforms a
state-of-the-art load forecasting model in all tasks.
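The abstract describes the composition only at a high level: a simple base learner plus boosting stages that are trained jointly as one network, with each stage consuming a univariate slice of the input. The following is a minimal PyTorch sketch of that idea under stated assumptions: the linear base learner, the `BoosterBlock` class, and all layer sizes are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a base learner plus additive
# booster stages, composed into a single end-to-end trainable network.
import torch
import torch.nn as nn


class BoosterBlock(nn.Module):
    """One boosting stage: maps a single (univariate) input channel of
    length `lookback` to an additive correction of the forecast."""
    def __init__(self, lookback: int, horizon: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lookback, hidden),
            nn.ReLU(),
            nn.Linear(hidden, horizon),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, lookback)
        return self.net(x)


class StackedBoostersNet(nn.Module):
    """Base learner on the load's own history, plus one booster per exogenous
    channel; stage outputs are summed, so each booster effectively learns to
    correct the residual left by the stages before it."""
    def __init__(self, n_channels: int, lookback: int, horizon: int):
        super().__init__()
        # A plain linear map as the "simple base learner" (an assumption;
        # the abstract does not specify its form).
        self.base = nn.Linear(lookback, horizon)
        self.boosters = nn.ModuleList(
            BoosterBlock(lookback, horizon) for _ in range(n_channels - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, lookback); channel 0 is the load itself,
        # the remaining channels are exogenous univariate series.
        forecast = self.base(x[:, 0, :])
        for i, booster in enumerate(self.boosters):
            forecast = forecast + booster(x[:, i + 1, :])
        return forecast  # (batch, horizon)


# Usage: 4 input channels (load history + 3 exogenous series), a 24-step
# lookback window, and a 24-step-ahead forecast.
model = StackedBoostersNet(n_channels=4, lookback=24, horizon=24)
y_hat = model(torch.randn(8, 4, 24))
print(y_hat.shape)  # torch.Size([8, 24])
```

Because the stage outputs are summed, backpropagation fits the base learner and all boosters jointly; this is what allows the cascade to be trained as one deep network rather than stage by stage as in classical gradient boosting.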
Related papers
- Exploring the design space of deep-learning-based weather forecasting systems [56.129148006412855]
This paper systematically analyzes the impact of different design choices on deep-learning-based weather forecasting systems.
We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models.
We propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures.
arXiv Detail & Related papers (2024-10-09T22:25:50Z)
- Optimizing Time Series Forecasting Architectures: A Hierarchical Neural Architecture Search Approach [17.391148813359088]
We propose a novel hierarchical neural architecture search approach for time series forecasting tasks.
With the design of a hierarchical search space, we incorporate many architecture types designed for forecasting tasks.
Results on long-term time-series forecasting tasks show that our approach can discover lightweight, high-performing forecasting architectures.
arXiv Detail & Related papers (2024-06-07T17:02:37Z)
- Partially Stochastic Infinitely Deep Bayesian Neural Networks [0.0]
We present a novel family of architectures that integrates partial stochasticity into the framework of infinitely deep neural networks.
We leverage the advantages of partial stochasticity in the infinite-depth limit, which include the benefits of full stochasticity.
We present a variety of architectural configurations, offering flexibility in network design.
arXiv Detail & Related papers (2024-02-05T20:15:19Z)
- Hysteretic Behavior Simulation Based on Pyramid Neural Network: Principle, Network Architecture, Case Study and Explanation [0.0]
A surrogate model based on neural networks shows significant potential in balancing efficiency and accuracy.
However, its serial information flow and prediction from single-level features adversely affect network performance.
A weighted stacked pyramid neural network architecture is proposed herein.
arXiv Detail & Related papers (2022-04-29T16:42:00Z)
- TMS: A Temporal Multi-scale Backbone Design for Speaker Embedding [60.292702363839716]
Current SOTA backbone networks for speaker embedding are designed to aggregate multi-scale features from an utterance with multi-branch network architectures for speaker representation.
We propose an effective temporal multi-scale (TMS) model in which multi-scale branches can be designed efficiently within a speaker embedding network, with almost no increase in computational cost.
arXiv Detail & Related papers (2022-03-17T05:49:35Z)
- Elastic Architecture Search for Diverse Tasks with Different Resources [87.23061200971912]
We study a new challenging problem of efficient deployment for diverse tasks with different resources, where the resource constraint and task of interest corresponding to a group of classes are dynamically specified at testing time.
Previous NAS approaches seek to design architectures for all classes simultaneously, which may not be optimal for some individual tasks.
We present a novel and general framework, called Elastic Architecture Search (EAS), permitting instant specializations at runtime for diverse tasks with various resource constraints.
arXiv Detail & Related papers (2021-08-03T00:54:27Z)
- Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
arXiv Detail & Related papers (2020-08-24T09:49:19Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
- ForecastNet: A Time-Variant Deep Feed-Forward Neural Network Architecture for Multi-Step-Ahead Time-Series Forecasting [6.043572971237165]
We propose ForecastNet, which uses a deep feed-forward architecture to provide a time-variant model.
ForecastNet is demonstrated to outperform statistical and deep learning benchmark models on several datasets.
arXiv Detail & Related papers (2020-02-11T01:03:33Z)
- Residual Attention Net for Superior Cross-Domain Time Sequence Modeling [0.0]
This paper serves as a proof-of-concept for a new architecture, the Residual Attention Net (RAN), which aims to give the model a higher-level understanding of sequence patterns.
We achieved 35 state-of-the-art results, with a further 10 results matching the current state of the art, without any model fine-tuning.
arXiv Detail & Related papers (2020-01-13T06:14:04Z)