End-to-end deep meta modelling to calibrate and optimize energy
consumption and comfort
- URL: http://arxiv.org/abs/2105.02814v2
- Date: Fri, 5 Nov 2021 09:33:37 GMT
- Title: End-to-end deep meta modelling to calibrate and optimize energy
consumption and comfort
- Authors: Max Cohen (IP Paris, CITI, TIPIC-SAMOVAR), Sylvain Le Corff (IP Paris,
CITI, TIPIC-SAMOVAR), Maurice Charbit, Marius Preda (IP Paris, ARTEMIS,
ARMEDIA-SAMOVAR), Gilles Nozière
- Abstract summary: We introduce a metamodel based on recurrent neural networks and trained to predict the behavior of a general class of buildings.
Parameters are estimated by comparing the predictions of the metamodel with real data obtained from sensors.
Energy consumption is optimized while maintaining a target thermal comfort and air quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new end-to-end methodology to optimize the energy
performance as well as comfort and air quality in large buildings without any
renovation work. We introduce a metamodel based on recurrent neural networks
and trained to predict the behavior of a general class of buildings using a
database sampled from a simulation program. This metamodel is then deployed in
different frameworks and its parameters are calibrated using the specific data
of two real buildings. Parameters are estimated by comparing the predictions of
the metamodel with real data obtained from sensors, using the CMA-ES algorithm,
a derivative-free optimization procedure. Energy consumption is then optimized
while maintaining a target thermal comfort and air quality, using the NSGA-II
multi-objective optimization procedure. The numerical experiments illustrate
how this metamodel yields a significant gain in energy efficiency, up to almost
10%, while being computationally far cheaper than numerical models and flexible
enough to adapt to several types of buildings.
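The calibration step described above can be sketched in a few lines. The toy code below is illustrative only: a simplified evolution strategy stands in for CMA-ES (both are derivative-free), a linear map stands in for the trained RNN metamodel, and the building parameters, inputs, and "sensor" data are all synthetic assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth building parameters and "sensor" observations.
true_params = np.array([2.0, -1.0])   # e.g. insulation / ventilation factors
inputs = rng.normal(size=(50, 2))     # e.g. outdoor temperature, occupancy
sensor_data = inputs @ true_params    # observed indoor response

def metamodel(params, x):
    """Placeholder for the trained RNN metamodel's prediction."""
    return x @ params

def calibration_loss(params):
    """Mean squared error between metamodel predictions and sensor data."""
    return float(np.mean((metamodel(params, inputs) - sensor_data) ** 2))

def evolve(loss, dim=2, pop=20, elite=5, sigma=0.5, generations=60):
    """Simplified evolution strategy: sample, rank, recombine the elite.
    (A crude stand-in for CMA-ES, which also adapts a full covariance.)"""
    mean = np.zeros(dim)
    for _ in range(generations):
        candidates = mean + sigma * rng.normal(size=(pop, dim))
        ranked = sorted(candidates, key=loss)
        mean = np.mean(ranked[:elite], axis=0)  # recombine best candidates
        sigma *= 0.95                           # crude step-size decay
    return mean

calibrated = evolve(calibration_loss)
print(calibrated)  # should land close to true_params
```

In the paper's setting, the loss would compare the metamodel's sequence predictions to real sensor time series, and the calibrated parameters would then feed the NSGA-II stage, which trades energy consumption against comfort and air-quality targets.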
Related papers
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models.
We propose a novel model fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- Edge-Efficient Deep Learning Models for Automatic Modulation Classification: A Performance Analysis [0.7428236410246183]
We investigate optimized convolutional neural networks (CNNs) developed for automatic modulation classification (AMC) of wireless signals.
We propose optimized models with the combinations of these techniques to fuse the complementary optimization benefits.
The experimental results show that the proposed individual and combined optimization techniques are highly effective for developing models with significantly less complexity.
arXiv Detail & Related papers (2024-04-11T06:08:23Z)
- Fairer and More Accurate Tabular Models Through NAS [14.147928131445852]
We propose using multi-objective Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO) in the first application to the very challenging domain of tabular data.
We show that models optimized solely for accuracy with NAS often fail to inherently address fairness concerns.
We produce architectures that consistently dominate state-of-the-art bias mitigation methods in fairness, accuracy, or both.
arXiv Detail & Related papers (2023-10-18T17:56:24Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Artificial Intelligence-Assisted Optimization and Multiphase Analysis of Polygon PEM Fuel Cells [0.0]
The models have been optimized after achieving improved cell performance.
The optimized hexagonal and pentagonal designs increase the output current density by 21.8% and 39.9%, respectively.
arXiv Detail & Related papers (2022-04-10T04:49:10Z)
- Development of a hybrid machine-learning and optimization tool for performance-based solar shading design [0.0]
This research covers 87,912 design alternatives, with six calculated metrics used to train optimized machine learning models.
The most accurate and fastest estimation model was Random Forest, with an r2_score of 0.967 to 1.
The developed tool can evaluate each design alternative in less than a few seconds.
arXiv Detail & Related papers (2022-01-09T14:54:33Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models achieve superior performance on most NLP tasks thanks to their large parameter capacity, but this capacity also incurs a huge computation cost.
We explore to accelerate large-model inference by conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- End-to-end deep metamodeling to calibrate and optimize energy loads [0.0]
We propose a new end-to-end methodology to optimize the energy performance and the comfort, air quality and hygiene of large buildings.
A metamodel based on a Transformer network is introduced and trained using a dataset sampled with a simulation program.
arXiv Detail & Related papers (2020-06-19T07:40:11Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves comparable performance with about 0.2% (100k) of the parameters of large models on popular object detection benchmarks.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.