End-to-end deep metamodeling to calibrate and optimize energy loads
- URL: http://arxiv.org/abs/2006.12390v1
- Date: Fri, 19 Jun 2020 07:40:11 GMT
- Title: End-to-end deep metamodeling to calibrate and optimize energy loads
- Authors: Max Cohen (TSP, IP Paris, SAMOVAR), Maurice Charbit (LTCI), Sylvain Le
Corff (TSP, IP Paris, SAMOVAR), Marius Preda (TSP, IP Paris, SAMOVAR), Gilles
Nozière
- Abstract summary: We propose a new end-to-end methodology to optimize the energy performance and the comfort, air quality and hygiene of large buildings.
A metamodel based on a Transformer network is introduced and trained using a dataset sampled with a simulation program.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new end-to-end methodology to optimize the energy
performance and the comfort, air quality and hygiene of large buildings. A
metamodel based on a Transformer network is introduced and trained using a
dataset sampled with a simulation program. Then, a few physical parameters and
the building management system settings of this metamodel are calibrated using
the CMA-ES optimization algorithm and real data obtained from sensors. Finally,
the optimal settings to minimize the energy loads while maintaining a target
thermal comfort and air quality are obtained using a multi-objective
optimization procedure. The numerical experiments illustrate how this metamodel
ensures a significant gain in energy efficiency while being computationally
much more appealing than models requiring a huge number of physical parameters
to be estimated.
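As a rough illustration of the calibration step described in the abstract, the sketch below fits a few parameters of a metamodel to sensor data by minimizing a mean-squared error, using a simplified (mu, lambda) evolution strategy as a stand-in for CMA-ES. The `metamodel` function, its two parameters, and the synthetic "sensor" data are all hypothetical placeholders for the paper's Transformer metamodel and real measurements.

```python
import numpy as np

# Hypothetical toy stand-in for the Transformer metamodel: maps two
# "physical" parameters to a predicted 24-hour indoor-temperature trace.
def metamodel(params, horizon=24):
    insulation, setpoint = params
    t = np.arange(horizon)
    return setpoint + np.exp(-insulation) * np.sin(2 * np.pi * t / 24)

# Synthetic "sensor" data generated from known ground-truth parameters,
# so we can check that calibration recovers them.
true_params = np.array([1.5, 20.0])
sensor_data = metamodel(true_params)

def loss(params):
    # Calibration objective: mean squared error between metamodel
    # predictions and the (here synthetic) sensor measurements.
    return np.mean((metamodel(params) - sensor_data) ** 2)

# Simplified (mu, lambda) evolution strategy standing in for CMA-ES:
# sample candidates around the current mean, keep the best, recentre.
rng = np.random.default_rng(0)
mean, sigma = np.array([0.5, 15.0]), 1.0
for _ in range(60):
    pop = mean + sigma * rng.standard_normal((20, 2))
    scores = np.array([loss(p) for p in pop])
    elite = pop[np.argsort(scores)[:5]]
    mean = elite.mean(axis=0)
    sigma *= 0.95  # crude step-size decay; real CMA-ES adapts a full covariance

print(mean)  # calibrated parameters, expected to approach true_params
```

A production CMA-ES (e.g. via a dedicated library) additionally adapts the full covariance matrix of the search distribution, which matters when parameters are correlated or differently scaled.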
Related papers
- Automated Computational Energy Minimization of ML Algorithms using Constrained Bayesian Optimization [1.2891210250935148]
We evaluate Constrained Bayesian Optimization (CBO) with the primary objective of minimizing energy consumption.
We demonstrate that CBO lowers energy consumption without compromising the predictive performance of ML models.
arXiv Detail & Related papers (2024-07-08T09:49:38Z) - Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z) - E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z) - Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z) - Artificial Intelligence-Assisted Optimization and Multiphase Analysis of
Polygon PEM Fuel Cells [0.0]
The models have been optimized after achieving improved cell performance.
The optimized hexagonal and pentagonal designs increase the output current density by 21.8% and 39.9%, respectively.
arXiv Detail & Related papers (2022-04-10T04:49:10Z) - Automatic prior selection for meta Bayesian optimization with a case
study on tuning deep neural network optimizers [47.013395100497775]
We propose a principled approach to solve such expensive hyperparameter tuning problems efficiently.
Key to the performance of BO is specifying and refining a distribution over functions, which is used to reason about the optima of the underlying function being optimized.
We verify our approach in realistic model training setups by training tens of thousands of configurations of near-state-of-the-art models on popular image and text datasets.
arXiv Detail & Related papers (2021-09-16T20:46:26Z) - Conservative Objective Models for Effective Offline Model-Based
Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z) - End-to-end deep meta modelling to calibrate and optimize energy
consumption and comfort [0.0]
We introduce a metamodel based on recurrent neural networks and trained to predict the behavior of a general class of buildings.
Parameters are estimated by comparing the predictions of the metamodel with real data obtained from sensors.
Energy consumptions are optimized while maintaining a target thermal comfort and air quality.
arXiv Detail & Related papers (2021-02-01T10:21:09Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions of the unobserved parameters that lead to better decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Sample-Efficient Optimization in the Latent Space of Deep Generative
Models via Weighted Retraining [1.5293427903448025]
We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.
We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value.
This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems.
arXiv Detail & Related papers (2020-06-16T14:34:40Z)
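The weighted-retraining idea in the entry above can be illustrated with a rank-based weighting scheme: points queried along the optimization trajectory are weighted by how well they score, so the generative model is retrained with more emphasis on promising regions. The `1 / (k * n + rank)` form and the parameter `k` below are assumptions for illustration, not necessarily the paper's exact choice.

```python
# Rank-based training weights: better-scoring points get larger weights.
# k controls how aggressively the best points are up-weighted
# (smaller k -> more weight concentrated on the top-ranked point).
def rank_weights(objective_values, k=1e-3, maximize=True):
    n = len(objective_values)
    # Indices sorted from best to worst objective value.
    order = sorted(range(n), key=lambda i: objective_values[i], reverse=maximize)
    weights = [0.0] * n
    for rank, i in enumerate(order):  # rank 0 = best point
        weights[i] = 1.0 / (k * n + rank)
    total = sum(weights)
    return [w / total for w in weights]  # normalise to sum to 1

scores = [0.1, 0.9, 0.4, 0.7]
w = rank_weights(scores)
# The best-scoring point (0.9) receives the largest weight; these weights
# would then be passed to the generative model's training loss.
```

In the full method, retraining with these weights is repeated periodically as new points are queried, so the latent space progressively reshapes around high-value regions.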
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.