Systematic review of deep learning and machine learning for building energy
- URL: http://arxiv.org/abs/2202.12269v1
- Date: Wed, 23 Feb 2022 13:40:45 GMT
- Title: Systematic review of deep learning and machine learning for building energy
- Authors: Sina Ardabili, Leila Abdolalizadeh, Csaba Mako, Bernat Torok, Amir Mosavi
- Abstract summary: Building energy (BE) management has an essential role in urban sustainability and smart cities.
Machine learning (ML) and deep learning (DL) methods and applications have shown promise for advancing accurate, high-performance energy models.
The present study provides a comprehensive review of ML and DL-based techniques applied for handling BE systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building energy (BE) management plays an essential role in urban
sustainability and smart cities. Recently, novel data science and data-driven
technologies have shown significant progress in analyzing energy consumption
and energy demand data sets for smarter energy management. Machine learning
(ML) and deep learning (DL) methods and applications, in particular, have been
promising for advancing accurate, high-performance energy models. The present
study provides a comprehensive review of ML- and DL-based techniques applied to
BE systems and further evaluates their performance. Through a systematic review
and a comprehensive taxonomy, the advances of ML- and DL-based techniques are
carefully investigated, and the promising models are introduced. For energy
demand forecasting, hybrid and ensemble methods fall in the high-robustness
range, SVM-based methods in the good-robustness range, ANN-based methods in the
medium-robustness range, and linear regression models in the low-robustness
range. For energy consumption forecasting, DL-based, hybrid, and ensemble-based
models achieved the highest robustness scores; ANN-, SVM-, and single-ML models
achieved good to medium robustness; and LR-based models achieved the lowest
robustness scores. For energy load forecasting, LR-based models again achieved
the lowest robustness scores, hybrid and ensemble-based models the highest,
DL- and SVM-based techniques good scores, and ANN-based techniques medium
scores.
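As a rough illustration of the kind of cross-family comparison the review reports, the sketch below fits one representative model per family (LR, SVM, ANN, ensemble) on a synthetic hourly demand series. The data generator, features, and error metric are placeholder assumptions, not the review's benchmark.

```python
# Illustrative comparison of the model families discussed in the review
# (linear regression, SVM, ANN, ensemble) on a synthetic hourly demand series.
# The data generator, features, and metric are placeholders, not the review's
# actual benchmark or evaluation protocol.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
temp = 10 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
demand = (100
          + 20 * np.sin(2 * np.pi * hours / 24)          # daily cycle
          + 10 * np.sin(2 * np.pi * hours / (24 * 7))    # weekly cycle
          + 1.5 * np.maximum(temp - 18, 0)               # cooling load
          + rng.normal(0, 5, hours.size))

X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24),
                     (hours // 24) % 7,                  # day of week
                     temp])
split = int(0.8 * hours.size)                            # chronological split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], demand[:split], demand[split:]

models = {
    "LR": LinearRegression(),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64),
                                      max_iter=500, random_state=0)),
    "Ensemble": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name:8s} test MAPE = {mape:.3f}")
```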
Related papers
- Benchmarks as Microscopes: A Call for Model Metrology [76.64402390208576]
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- EcoMLS: A Self-Adaptation Approach for Architecting Green ML-Enabled Systems [1.0923877073891446]
Self-adaptation techniques, recognized for their potential in energy savings within software systems, have yet to be extensively explored in Machine Learning-Enabled Systems.
This research underscores the feasibility of enhancing MLS sustainability through intelligent runtime adaptations.
arXiv Detail & Related papers (2024-04-17T14:12:47Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Machine Unlearning of Pre-trained Large Language Models [17.40601262379265]
This study investigates the concept of 'the right to be forgotten' within the context of large language models (LLMs).
We explore machine unlearning as a pivotal solution, with a focus on pre-trained models.
arXiv Detail & Related papers (2024-02-23T07:43:26Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate, for example, that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15% points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression [64.07696663255155]
Large-scale pre-trained language models (LLMs) have demonstrated exceptional performance in various natural language processing (NLP) tasks.
However, the massive size of these models poses huge challenges for their deployment in real-world applications.
We introduce a novel compression paradigm called Retrieval-based Knowledge Transfer (RetriKT) which effectively transfers the knowledge of LLMs to extremely small-scale models.
arXiv Detail & Related papers (2023-10-24T07:58:20Z)
- MATNet: Multi-Level Fusion Transformer-Based Model for Day-Ahead PV Generation Forecasting [0.47518865271427785]
MATNet is a novel self-attention transformer-based architecture for PV power generation forecasting.
It consists of a hybrid approach that combines the AI paradigm with the prior physical knowledge of PV power generation.
Results show that our proposed architecture significantly outperforms the current state-of-the-art methods.
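As a rough, generic illustration of such a hybrid (not the MATNet architecture itself), the sketch below combines a toy clear-sky prior for an assumed 5 kW array with a learned residual correction from weather features; the data, array size, and feature choices are all assumptions.

```python
# Generic illustration of the hybrid idea (physical prior + learned model);
# this is NOT the MATNet architecture. A toy clear-sky curve serves as the
# physical prior, and a gradient-boosted model learns the residual from
# synthetic weather features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 200)
hour_of_day = hours % 24

# Physical prior: idealized clear-sky PV output for an assumed 5 kW array.
clear_sky = 5.0 * np.clip(np.sin(np.pi * (hour_of_day - 6) / 12), 0, None)

# Synthetic "measured" output: clear-sky curve attenuated by cloud cover.
cloud = rng.uniform(0, 1, hours.size)
measured = clear_sky * (1 - 0.7 * cloud) + rng.normal(0, 0.1, hours.size)

# Learn the deviation from the prior as a function of weather features.
X = np.column_stack([cloud, hour_of_day])
residual = measured - clear_sky
split = int(0.8 * hours.size)
correction = GradientBoostingRegressor(random_state=0).fit(X[:split], residual[:split])

hybrid_forecast = clear_sky[split:] + correction.predict(X[split:])
mae = np.mean(np.abs(hybrid_forecast - measured[split:]))
print(f"Hybrid (prior + learned correction) MAE: {mae:.3f} kW")
```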
arXiv Detail & Related papers (2023-06-17T14:03:09Z)
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
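As a hedged illustration of this kind of perturbation test (not the paper's setup), the sketch below trains a regressor on clean synthetic sensor data and re-scores it under additive Gaussian noise and a simulated stuck-sensor fault; the model choice, data, and noise level are all assumptions.

```python
# Illustrative robustness check, not the paper's experimental setup:
# train on clean synthetic sensor data, then measure degradation under
# additive Gaussian noise and a simulated stuck-sensor fault (OOD-style shift).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))                         # 8 hypothetical sensor channels
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=5000)
X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)

def rmse(X_eval):
    return mean_squared_error(y_te, model.predict(X_eval)) ** 0.5

clean = rmse(X_te)
noisy = rmse(X_te + rng.normal(0, 0.5, X_te.shape))    # additive sensor noise
stuck = X_te.copy()
stuck[:, 0] = stuck[0, 0]                              # one sensor frozen at a constant
faulty = rmse(stuck)
print(f"RMSE clean={clean:.3f}  noisy={noisy:.3f}  stuck-sensor={faulty:.3f}")
```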
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- An Adaptive Deep Learning Framework for Day-ahead Forecasting of Photovoltaic Power Generation [0.8702432681310401]
This paper proposes an adaptive LSTM (AD-LSTM) model, which is a DL framework that can not only acquire general knowledge from historical data, but also dynamically learn specific knowledge from newly-arrived data.
The developed AD-LSTM model demonstrates greater forecasting capability than the offline LSTM model, particularly in the presence of concept drift.
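A minimal sketch of the adapt-on-new-data idea, assuming a plain Keras LSTM rather than the paper's AD-LSTM architecture; the window sizes, feature count, update schedule, and random stand-in data are placeholders.

```python
# Minimal sketch of the adapt-on-arrival idea (not the paper's AD-LSTM):
# an LSTM trained offline on history is periodically fine-tuned on the
# newest observations so it can track concept drift in PV output.
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON, N_FEAT = 48, 24, 4        # assumed window sizes and feature count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, N_FEAT)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(HORIZON),           # day-ahead forecast (24 hourly values)
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(2000, LOOKBACK, N_FEAT)).astype("float32")
y_hist = rng.normal(size=(2000, HORIZON)).astype("float32")
model.fit(X_hist, y_hist, epochs=2, batch_size=64, verbose=0)    # offline phase

# Online phase: each new day, fine-tune briefly on the latest observations
# before issuing the next day-ahead forecast.
for day in range(5):                           # stand-in for a real data stream
    X_new = rng.normal(size=(24, LOOKBACK, N_FEAT)).astype("float32")
    y_new = rng.normal(size=(24, HORIZON)).astype("float32")
    model.fit(X_new, y_new, epochs=1, batch_size=24, verbose=0)  # incremental update
    next_forecast = model.predict(X_new[-1:], verbose=0)
```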
arXiv Detail & Related papers (2021-09-28T02:39:56Z)
- Energy Forecasting in Smart Grid Systems: A Review of the State-of-the-art Techniques [2.3436632098950456]
This paper presents a review of state-of-the-art forecasting methods for smart grid (SG) systems.
Traditional point forecasting methods including statistical, machine learning (ML), and deep learning (DL) are extensively investigated.
A comparative case study using the Victorian electricity consumption and American Electric Power (AEP) datasets is conducted.
arXiv Detail & Related papers (2020-11-25T09:17:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.