Question Answering for Decisionmaking in Green Building Design: A Multimodal Data Reasoning Method Driven by Large Language Models
- URL: http://arxiv.org/abs/2412.04741v1
- Date: Fri, 06 Dec 2024 03:02:58 GMT
- Title: Question Answering for Decisionmaking in Green Building Design: A Multimodal Data Reasoning Method Driven by Large Language Models
- Authors: Yihui Li, Xiaoyue Yan, Hao Zhou, Borong Lin
- Abstract summary: Decision-making in green building design (DGBD) is crucial for reducing building energy costs.
Many studies have already applied artificial intelligence (AI) methods to this field.
This study innovatively integrates large language models with DGBD, creating GreenQA, a question answering framework for multimodal data reasoning.
- Score: 13.501199715254906
- License:
- Abstract: In recent years, the critical role of green buildings in addressing energy consumption and environmental issues has become widely acknowledged. Research indicates that over 40% of potential energy savings can be achieved during the early design stage. Therefore, decision-making in green building design (DGBD), which is based on modeling and performance simulation, is crucial for reducing building energy costs. However, the field of green building encompasses a broad range of specialized knowledge, which involves significant learning costs and results in low decision-making efficiency. Many studies have already applied artificial intelligence (AI) methods to this field. Based on previous research, this study innovatively integrates large language models with DGBD, creating GreenQA, a question answering framework for multimodal data reasoning. Utilizing Retrieval Augmented Generation, Chain of Thought, and Function Call methods, GreenQA enables multimodal question answering, including weather data analysis and visualization, retrieval of green building cases, and knowledge query. Additionally, this study conducted a user survey using the GreenQA web platform. The results showed that 96% of users believed the platform helped improve design efficiency. This study not only effectively supports DGBD but also provides inspiration for AI-assisted design.
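The abstract outlines GreenQA's combination of Retrieval Augmented Generation, Chain of Thought prompting, and Function Call routing across weather analysis, case retrieval, and knowledge queries, but the excerpt includes no code. The sketch below is a minimal, illustrative Python mock-up of that kind of routing-plus-retrieval pipeline under stated assumptions: all data records, tool names (summarize_weather, retrieve_cases, knowledge_query), and the keyword-based router are hypothetical stand-ins, and the LLM call itself is stubbed out rather than invoking any real API.

```python
# Minimal sketch of a routing + retrieval-augmented QA loop of the kind the
# abstract describes: a question is dispatched ("function call") to one of
# three tools -- weather-data analysis, green-building case retrieval, or a
# knowledge lookup -- and the retrieved context is assembled into a prompt.
# The paper releases no code; every name and data record here is hypothetical,
# and the LLM step is replaced by returning the assembled prompt.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Case:
    title: str
    keywords: set
    summary: str


# Toy knowledge base standing in for the framework's retrieval corpus.
CASES = [
    Case("Passive-solar office, Beijing", {"solar", "shading", "office"},
         "South-facing glazing with external shading cut cooling load."),
    Case("Naturally ventilated school, Shenzhen", {"ventilation", "humid", "school"},
         "Cross-ventilation sized from prevailing-wind data."),
]

# Hypothetical hourly dry-bulb temperatures (deg C) for one design day.
WEATHER = {"temperature_c": [22, 21, 21, 24, 28, 31, 33, 32, 29, 26, 24, 23]}


def summarize_weather(_: str) -> str:
    t = WEATHER["temperature_c"]
    return f"Design-day temperature: min {min(t)} C, mean {mean(t):.1f} C, max {max(t)} C."


def retrieve_cases(question: str) -> str:
    # Naive keyword-overlap retrieval; a real system would likely use embeddings.
    words = set(question.lower().split())
    best = max(CASES, key=lambda c: len(c.keywords & words))
    return f"Closest case: {best.title}. {best.summary}"


def knowledge_query(question: str) -> str:
    return "Consult green-building standards relevant to: " + question


TOOLS = {"weather": summarize_weather, "cases": retrieve_cases, "knowledge": knowledge_query}


def route(question: str) -> str:
    # Stand-in for the LLM's function-call decision: pick a tool by keyword.
    q = question.lower()
    if "weather" in q or "temperature" in q:
        return "weather"
    if "case" in q or "example" in q:
        return "cases"
    return "knowledge"


def answer(question: str) -> str:
    tool = route(question)
    context = TOOLS[tool](question)
    # A real implementation would send this prompt to an LLM with
    # chain-of-thought instructions; here we just return the assembled prompt.
    return f"[tool={tool}] Context: {context}\nQuestion: {question}"


if __name__ == "__main__":
    print(answer("What does the design-day temperature profile look like?"))
    print(answer("Show a case with external shading for an office."))
```

The sketch only shows the control flow; the framework's actual retrieval corpus, tool schemas, and prompt design are not described at this level of detail in the abstract.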
Related papers
- Addressing the sustainable AI trilemma: a case study on LLM agents and RAG [7.6212949300713015]
Large language models (LLMs) have demonstrated significant capabilities, but their widespread deployment and more advanced applications raise critical sustainability challenges.
We propose the concept of the Sustainable AI Trilemma, highlighting the tensions between AI capability, digital equity, and environmental sustainability.
arXiv Detail & Related papers (2025-01-14T17:21:16Z) - Darkit: A User-Friendly Software Toolkit for Spiking Large Language Model [50.37090759139591]
Large language models (LLMs) have been widely applied in various practical applications, typically comprising billions of parameters.
The human brain, employing bio-plausible spiking mechanisms, can accomplish the same tasks while significantly reducing energy consumption.
We are releasing a software toolkit named DarwinKit (Darkit) to accelerate the adoption of brain-inspired large language models.
arXiv Detail & Related papers (2024-12-20T07:50:08Z) - Green Federated Learning: A new era of Green Aware AI [11.536626349203361]
Federated Learning (FL) presents new opportunities to address the need for green-aware AI.
It's crucial to furnish researchers, stakeholders, and interested parties with a roadmap to navigate and understand existing efforts and gaps in green-aware AI algorithms.
This survey primarily aims to achieve this objective by identifying and analyzing over a hundred FL works.
arXiv Detail & Related papers (2024-09-19T09:54:18Z) - Learn to Code Sustainably: An Empirical Study on LLM-based Green Code Generation [7.8273713434806345]
We evaluate the sustainability of auto-generated code produced by generative commercial AI language models.
We compare the performance and green capacity of human-generated code and code generated by the three AI language models.
arXiv Detail & Related papers (2024-03-05T22:12:01Z) - The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv Detail & Related papers (2023-12-01T16:00:25Z) - Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion around the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
arXiv Detail & Related papers (2023-11-28T15:09:36Z) - Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning [51.02352381270177]
Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology.
The choice of the cut layer in SFL can have a substantial impact on the energy consumption of clients and their privacy.
This article provides a comprehensive overview of the SFL process and thoroughly analyzes energy consumption and privacy.
arXiv Detail & Related papers (2023-11-15T23:23:42Z) - On the Opportunities of Green Computing: A Survey [80.21955522431168]
Artificial Intelligence (AI) has achieved significant advancements in technology and research over several decades of development.
The need for high computing power brings higher carbon emissions and undermines research fairness.
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
arXiv Detail & Related papers (2023-11-01T11:16:41Z) - A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to reconcile the demand for high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z) - A survey on Variational Autoencoders from a GreenAI perspective [0.0]
Variational AutoEncoders (VAEs) are powerful generative models that merge elements from statistics and information theory with the flexibility offered by deep neural networks.
This article provides a comparative evaluation of some of the most successful, recent variations of VAEs.
arXiv Detail & Related papers (2021-03-01T15:26:39Z)