BuildEvo: Designing Building Energy Consumption Forecasting Heuristics via LLM-driven Evolution
- URL: http://arxiv.org/abs/2507.12207v1
- Date: Wed, 16 Jul 2025 13:07:24 GMT
- Title: BuildEvo: Designing Building Energy Consumption Forecasting Heuristics via LLM-driven Evolution
- Authors: Subin Lin, Chuanbo Hua
- Abstract summary: This paper introduces BuildEvo, a novel framework that uses Large Language Models (LLMs) to automatically design effective and interpretable energy prediction heuristics. Within an evolutionary process, BuildEvo guides LLMs to construct and enhance heuristics by systematically incorporating physical insights from building characteristics and operational data. Evaluations show BuildEvo achieves state-of-the-art performance on benchmarks, offering improved generalization and transparent prediction logic.
- Score: 0.7366405857677227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate building energy forecasting is essential, yet traditional heuristics often lack precision, while advanced models can be opaque and struggle with generalization by neglecting physical principles. This paper introduces BuildEvo, a novel framework that uses Large Language Models (LLMs) to automatically design effective and interpretable energy prediction heuristics. Within an evolutionary process, BuildEvo guides LLMs to construct and enhance heuristics by systematically incorporating physical insights from building characteristics and operational data (e.g., from the Building Data Genome Project 2). Evaluations show BuildEvo achieves state-of-the-art performance on benchmarks, offering improved generalization and transparent prediction logic. This work advances the automated design of robust, physically grounded heuristics, promoting trustworthy models for complex energy systems.
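The abstract describes an evolutionary loop in which an LLM proposes and refines forecasting heuristics that are then scored and selected. A minimal sketch of such a loop is shown below; a random parameter perturbation stands in for the LLM call, and the linear-heuristic representation, function names, and toy data are illustrative assumptions, not BuildEvo's actual design.

```python
import random

def make_linear(a, b):
    """Build a toy heuristic: predicted energy = a * feature + b."""
    h = lambda x: a * x + b
    h.params = (a, b)
    return h

def fitness(heuristic, data):
    """Mean absolute error of a heuristic over (feature, target) pairs."""
    return sum(abs(heuristic(x) - y) for x, y in data) / len(data)

def llm_mutate(heuristic, rng):
    """Stand-in for an LLM proposing a refined heuristic.
    Here we only perturb coefficients; a real system would prompt an
    LLM with the heuristic's code plus building/physical context."""
    a, b = heuristic.params
    return make_linear(a + rng.uniform(-0.5, 0.5),
                       b + rng.uniform(-0.5, 0.5))

def evolve(data, generations=50, pop_size=8, seed=0):
    """Evolutionary loop: keep the fittest half, refill with mutations."""
    rng = random.Random(seed)
    pop = [make_linear(rng.uniform(0, 2), rng.uniform(-1, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda h: fitness(h, data))
        survivors = pop[: pop_size // 2]          # elitism: best never lost
        children = [llm_mutate(rng.choice(survivors), rng)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda h: fitness(h, data))

# Toy data: energy grows linearly with temperature (target: a=3, b=2).
data = [(t, 3 * t + 2) for t in range(10)]
best = evolve(data)
```

Because survivors are carried over unchanged, the best fitness in the population is non-increasing across generations; the LLM's role in the full framework is to make the mutation step informed rather than random.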
Related papers
- EyeSim-VQA: A Free-Energy-Guided Eye Simulation Framework for Video Quality Assessment [68.77813885751308]
EyeSimVQA is a novel VQA framework that incorporates free-energy-based self-repair. We show EyeSimVQA achieves competitive or superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-06-13T08:00:54Z)
- Triple Attention Transformer Architecture for Time-Dependent Concrete Creep Prediction [0.0]
This paper presents a novel Triple Attention Transformer Architecture for predicting time-dependent concrete creep. By transforming concrete creep prediction into an autoregressive sequence modeling task similar to language processing, our architecture leverages the transformer's self-attention mechanisms. The architecture achieves exceptional performance with a mean absolute percentage error of 1.63% and R2 values of 0.999 across all datasets.
arXiv Detail & Related papers (2025-05-28T22:30:35Z)
- Exploring Capabilities of Time Series Foundation Models in Building Analytics [15.525789412274587]
Internet of Things (IoT) networks have transformed the management and optimization of building energy consumption.
We conduct a comprehensive benchmarking of two publicly available IoT datasets.
Our analysis shows that single-modal models demonstrate significant promise in overcoming the complexities of data variability and physical limitations in buildings.
arXiv Detail & Related papers (2024-10-28T02:49:22Z)
- Large Language Model Agent as a Mechanical Designer [7.136205674624813]
We propose a framework that leverages a pretrained Large Language Model (LLM) in conjunction with an FEM module to autonomously generate, evaluate, and refine structural designs. The LLM operates without domain-specific fine-tuning, using general reasoning to propose design candidates, interpret FEM-derived performance metrics, and apply structurally sound modifications. Compared to the Non-dominated Sorting Genetic Algorithm II (NSGA-II), our method achieves faster convergence and fewer FEM evaluations.
arXiv Detail & Related papers (2024-04-26T16:41:24Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- A Dynamic Feedforward Control Strategy for Energy-efficient Building System Operation [59.56144813928478]
Most current control strategies and optimization algorithms rely on receiving information from real-time feedback.
We propose an engineer-friendly control strategy framework that embeds dynamic prior knowledge from building system characteristics simultaneously for system control.
We tested it on a heating system control case against typical control strategies, showing that our framework offers a further energy-saving potential of 15%.
arXiv Detail & Related papers (2023-01-23T09:07:07Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18'000 synthetically generated instances of a pedestrian bridge in Switzerland.
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
- Adversarial Self-Attention for Language Understanding [89.265747130584]
This paper proposes an Adversarial Self-Attention mechanism (ASA).
ASA adversarially reconstructs the Transformer attentions and facilitates model training from contaminated model structures.
For fine-tuning, ASA-empowered models consistently outperform naive models by a large margin in both generalization and robustness.
arXiv Detail & Related papers (2022-06-25T09:18:10Z)
- Interpreting Machine Learning Models for Room Temperature Prediction in Non-domestic Buildings [0.0]
This work presents an interpretable machine learning model aimed at predicting room temperature in non-domestic buildings.
We demonstrate experimentally that the proposed model can accurately forecast room temperatures eight hours ahead in real-time.
arXiv Detail & Related papers (2021-11-23T11:16:35Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.