End-to-end deep metamodeling to calibrate and optimize energy loads
- URL: http://arxiv.org/abs/2006.12390v1
- Date: Fri, 19 Jun 2020 07:40:11 GMT
- Title: End-to-end deep metamodeling to calibrate and optimize energy loads
- Authors: Max Cohen (TSP, IP Paris, SAMOVAR), Maurice Charbit (LTCI), Sylvain Le
Corff (TSP, IP Paris, SAMOVAR), Marius Preda (TSP, IP Paris, SAMOVAR), Gilles
Nozière
- Abstract summary: We propose a new end-to-end methodology to optimize the energy performance and the comfort, air quality and hygiene of large buildings.
A metamodel based on a Transformer network is introduced and trained using a dataset sampled with a simulation program.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new end-to-end methodology to optimize the energy
performance and the comfort, air quality and hygiene of large buildings. A
metamodel based on a Transformer network is introduced and trained using a
dataset sampled with a simulation program. Then, a few physical parameters and
the building management system settings of this metamodel are calibrated using
the CMA-ES optimization algorithm and real data obtained from sensors. Finally,
the optimal settings to minimize the energy loads while maintaining a target
thermal comfort and air quality are obtained using a multi-objective
optimization procedure. The numerical experiments illustrate how this metamodel
ensures a significant gain in energy efficiency while being computationally
much more appealing than models requiring a huge number of physical parameters
to be estimated.
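The calibration step above can be sketched with a toy example. Full CMA-ES adapts a covariance matrix and step size along evolution paths; the snippet below substitutes a much simpler elitist (1+λ) evolution strategy, and the one-parameter `metamodel`, the synthetic sensor data, and all constants are hypothetical stand-ins, not the paper's Transformer metamodel.

```python
import random
import statistics

def metamodel(theta, outdoor_temps):
    # Toy stand-in for the Transformer metamodel: predicts indoor temperature
    # from a single physical parameter theta (e.g. an envelope heat-loss factor).
    return [22.0 - theta * (22.0 - t) * 0.1 for t in outdoor_temps]

# Synthetic "sensor" readings generated with a hidden true parameter.
random.seed(0)
outdoor = [random.uniform(-5.0, 15.0) for _ in range(50)]
true_theta = 0.7
sensor = [y + random.gauss(0.0, 0.05) for y in metamodel(true_theta, outdoor)]

def loss(theta):
    # Mean squared error between metamodel predictions and sensor data.
    preds = metamodel(theta, outdoor)
    return statistics.fmean((p - s) ** 2 for p, s in zip(preds, sensor))

# Simplified elitist (1+lambda) evolution strategy; CMA-ES additionally adapts
# a full covariance matrix of the search distribution instead of this fixed decay.
theta, sigma = 0.0, 0.5
for _ in range(60):
    candidates = [theta + sigma * random.gauss(0.0, 1.0) for _ in range(10)]
    best = min(candidates, key=loss)
    if loss(best) < loss(theta):
        theta = best  # keep the incumbent unless a candidate improves the loss
    sigma *= 0.95     # simple step-size decay

print(round(theta, 2))  # close to the hidden parameter
```

The same loop structure carries over to the real setting: only the metamodel, the parameter vector, and the search-distribution update change.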
Related papers
- Incremental Few-Shot Adaptation for Non-Prehensile Object Manipulation using Parallelizable Physics Simulators [5.483662156126757]
We propose a novel approach for non-prehensile manipulation which iteratively adapts a physics-based dynamics model for model-predictive control.
We adapt the parameters of the model incrementally with a few examples of robot-object interactions.
We evaluate our few-shot adaptation approach in several object pushing experiments in simulation and with a real robot.
arXiv Detail & Related papers (2024-09-20T05:24:25Z)
- Impact of ML Optimization Tactics on Greener Pre-Trained ML Models [46.78148962732881]
This study aims to (i) analyze image classification datasets and pre-trained models, (ii) improve inference efficiency by comparing optimized and non-optimized models, and (iii) assess the economic impact of the optimizations.
We conduct a controlled experiment to evaluate the impact of various PyTorch optimization techniques (dynamic quantization, torch.compile, local pruning, and global pruning) applied to 42 Hugging Face models for image classification.
Dynamic quantization demonstrates significant reductions in inference time and energy consumption, making it highly suitable for large-scale systems.
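The idea behind dynamic quantization can be sketched in a few lines: weights are stored as int8 plus a scale factor and dequantized on the fly at compute time. In PyTorch this is applied with `torch.quantization.quantize_dynamic`; the pure-Python helpers below are a hypothetical illustration of the mechanism, not the library API.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of a float list to int8 values + scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights at compute time.
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.05, 0.88, -0.44]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err <= scale / 2 + 1e-9)  # True
```

Storing 8-bit integers instead of 32-bit floats is what yields the memory and energy savings reported in the entry above; the scale keeps the error within half a quantization step.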
arXiv Detail & Related papers (2024-09-19T16:23:03Z)
- Automated Computational Energy Minimization of ML Algorithms using Constrained Bayesian Optimization [1.2891210250935148]
We evaluate Constrained Bayesian Optimization (CBO) with the primary objective of minimizing energy consumption.
We demonstrate that CBO lowers energy consumption without compromising the predictive performance of ML models.
arXiv Detail & Related papers (2024-07-08T09:49:38Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- End-to-end deep meta modelling to calibrate and optimize energy consumption and comfort [0.0]
We introduce a metamodel based on recurrent neural networks and trained to predict the behavior of a general class of buildings.
Parameters are estimated by comparing the predictions of the metamodel with real data obtained from sensors.
Energy consumptions are optimized while maintaining a target thermal comfort and air quality.
arXiv Detail & Related papers (2021-02-01T10:21:09Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the training pipeline results in predictions of the unknown parameters that lead to better decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
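The core "fit a cheap surrogate, then optimize it" step can be sketched in one dimension. The quadratic surrogate, `expensive_objective`, and sample points below are hypothetical illustrations, not the paper's learned low-dimensional, decision-focused surrogate.

```python
def expensive_objective(x):
    # Stand-in for a costly evaluation of a large optimization problem.
    return (x - 3.0) ** 2 + 5.0

# Evaluate the expensive objective at a handful of points.
xs = [0.0, 1.0, 2.0, 4.0, 6.0]
ys = [expensive_objective(x) for x in xs]

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Least-squares fit of y ~ a*x^2 + b*x + c via the normal equations.
basis = [[x * x, x, 1.0] for x in xs]
A = [[sum(bi[r] * bi[c] for bi in basis) for c in range(3)] for r in range(3)]
rhs = [sum(bi[r] * y for bi, y in zip(basis, ys)) for r in range(3)]
a, b, c = solve3(A, rhs)

x_star = -b / (2 * a)  # closed-form minimizer of the quadratic surrogate
print(round(x_star, 3))
```

Once the surrogate is fit, each candidate solution costs only a cheap model evaluation instead of a full solve of the original problem, which is what makes the approach scale.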
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining [1.5293427903448025]
We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.
We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value.
This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems.
arXiv Detail & Related papers (2020-06-16T14:34:40Z)
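The weighting idea behind weighted retraining can be sketched as follows. The exponential rank weighting, the toy latent points, and the "weighted mean" model are hypothetical simplifications: the paper's exact weighting scheme and generative model differ.

```python
import math

def rank_weights(scores, k=1.0):
    """Higher score = better; weight decays exponentially with rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, i in enumerate(order):
        ranks[i] = rank
    return [math.exp(-k * r) for r in ranks]

# Latent points queried along the optimization trajectory, with their
# objective values (hypothetical numbers for illustration).
latents = [0.0, 1.0, 2.0, 3.0, 4.0]
scores = [0.1, 0.3, 0.2, 0.9, 0.8]  # best points sit at latents 3.0 and 4.0

w = rank_weights(scores)
# Refitting the "generative model" here is just a weighted mean of the latents.
weighted_center = sum(wi * z for wi, z in zip(w, latents)) / sum(w)
plain_center = sum(latents) / len(latents)

# Weighting pulls the refit model toward the high-scoring region.
print(weighted_center > plain_center)  # True
```

Replacing the weighted mean with an actual generative-model retraining step, repeated periodically during optimization, gives the scheme the entry above describes.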
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.