Predictive Modeling: BIM Command Recommendation Based on Large-scale Usage Logs
- URL: http://arxiv.org/abs/2504.05319v2
- Date: Sun, 13 Jul 2025 17:14:34 GMT
- Authors: Changyu Du, Zihan Deng, Stavros Nousias, André Borrmann
- Abstract summary: We propose a BIM command recommendation framework that predicts the optimal next actions in real-time based on users' historical interactions. Our model builds upon the state-of-the-art Transformer backbones originally developed for large language models. When generating recommendations for the next command, our approach achieves a Recall@10 of approximately 84%.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The adoption of Building Information Modeling (BIM) and model-based design within the Architecture, Engineering, and Construction (AEC) industry has been hindered by the perception that using BIM authoring tools demands more effort than conventional 2D drafting. To enhance design efficiency, this paper proposes a BIM command recommendation framework that predicts the optimal next actions in real-time based on users' historical interactions. We propose a comprehensive filtering and enhancement method for large-scale raw BIM log data and introduce a novel command recommendation model. Our model builds upon the state-of-the-art Transformer backbones originally developed for large language models (LLMs), incorporating a custom feature fusion module, dedicated loss function, and targeted learning strategy. In a case study, the proposed method is applied to over 32 billion rows of real-world log data collected globally from the BIM authoring software Vectorworks. Experimental results demonstrate that our method can learn universal and generalizable modeling patterns from anonymous user interaction sequences across different countries, disciplines, and projects. When generating recommendations for the next command, our approach achieves a Recall@10 of approximately 84%. The code is available at: https://github.com/dcy0577/BIM-Command-Recommendation.git
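The reported Recall@10 measures how often the user's actual next command appears among the top 10 recommendations. A minimal sketch of how such a metric is computed (the function name, command names, and data layout are illustrative, not taken from the paper's codebase):

```python
def recall_at_k(ranked_predictions, ground_truth, k=10):
    """Fraction of test steps where the true next command
    appears in the model's top-k ranked recommendations."""
    hits = sum(
        1 for preds, truth in zip(ranked_predictions, ground_truth)
        if truth in preds[:k]
    )
    return hits / len(ground_truth)

# Toy example: 3 prediction steps, each a ranked list of command IDs
preds = [
    ["wall", "door", "window"],   # truth "door" is in the top 3
    ["roof", "slab", "stair"],    # truth "column" is missed
    ["line", "circle", "move"],   # truth "line" is in the top 3
]
truths = ["door", "column", "line"]
print(recall_at_k(preds, truths, k=3))  # 2 of 3 hits -> 0.666...
```

At k=10 over millions of logged command sequences, a score of ~84% means the true next command is surfaced in the top-10 list in roughly five out of six steps.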
Related papers
- Beyond Model Base Selection: Weaving Knowledge to Master Fine-grained Neural Network Design
We propose M-DESIGN, a curated model knowledge base (MKB) pipeline for mastering neural network refinement. First, we propose a knowledge weaving engine that reframes model refinement as an adaptive query problem over task metadata. Given a user's task query, M-DESIGN quickly matches and iteratively refines candidate models by leveraging a graph-relational knowledge schema.
arXiv Detail & Related papers (2025-07-21T07:49:19Z)
- Large Language Model as Universal Retriever in Industrial-Scale Recommender System
We show that Large Language Models (LLMs) can function as universal retrievers, capable of handling multiple objectives within a generative retrieval framework. We also introduce matrix decomposition to boost model learnability, discriminability, and transferability. Our Universal Retrieval Model (URM) can adaptively generate a set from tens of millions of candidates.
arXiv Detail & Related papers (2025-02-05T09:56:52Z)
- Instruction-Following Pruning for Large Language Models
We move beyond the traditional static pruning approach of determining a fixed pruning mask for a model. In our method, the pruning mask is input-dependent and adapts dynamically based on the information described in a user instruction. Our approach, termed "instruction-following pruning", introduces a sparse mask predictor that takes the user instruction as input and dynamically selects the most relevant model parameters for the given task.
arXiv Detail & Related papers (2025-01-03T20:19:14Z)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce UNCURL, an adaptive task-aware pruning technique that reduces the number of experts per MoE layer in an offline manner, post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent Framework
The Text2BIM framework generates 3D building models from natural language instructions. A rule-based model checker is introduced into the agentic workflow to guide the LLM agents in resolving issues. The framework can effectively generate high-quality, structurally rational building models.
arXiv Detail & Related papers (2024-08-15T09:48:45Z)
- Towards commands recommender system in BIM authoring tool using transformers
This study explores the potential of sequential recommendation systems to accelerate the BIM modeling process.
By treating BIM software commands as recommendable items, we introduce a novel end-to-end approach that predicts the next-best command based on user historical interactions.
arXiv Detail & Related papers (2024-06-02T17:47:06Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
- Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System
We propose an alternative approach, User-Guided Response Optimization (UGRO), which combines an LLM with a smaller task-oriented dialogue model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, combining it with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z)
- Streamlined Framework for Agile Forecasting Model Development towards Efficient Inventory Management
This paper proposes a framework for developing forecasting models by streamlining the connections between core components of the developmental process.
The proposed framework enables swift and robust integration of new datasets, experimentation on different algorithms, and selection of the best models.
arXiv Detail & Related papers (2023-04-13T08:52:32Z)
- Lifelong Generative Modelling Using Dynamic Expansion Graph Model
We study the forgetting behaviour of VAEs using a joint GR and ENA methodology.
We propose a novel Dynamic Expansion Graph Model (DEGM).
arXiv Detail & Related papers (2021-12-15T17:35:27Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- User Memory Reasoning for Conversational Recommendation
We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests.
MGConvRex captures human-level reasoning over user memory and has disjoint training/testing sets of users for zero-shot (cold-start) reasoning for recommendation.
arXiv Detail & Related papers (2020-05-30T05:29:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.