LLM-FuncMapper: Function Identification for Interpreting Complex Clauses
in Building Codes via LLM
- URL: http://arxiv.org/abs/2308.08728v1
- Date: Thu, 17 Aug 2023 01:58:04 GMT
- Title: LLM-FuncMapper: Function Identification for Interpreting Complex Clauses
in Building Codes via LLM
- Authors: Zhe Zheng, Ke-Yin Chen, Xin-Yu Cao, Xin-Zheng Lu, Jia-Rui Lin
- Abstract summary: LLM-FuncMapper is an approach to identifying predefined functions needed to interpret various regulatory clauses.
Almost 100% of computer-processible clauses can be interpreted and represented as computer-executable codes.
This study is the first attempt to introduce LLM for understanding and interpreting complex regulatory clauses.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As a vital stage of automated rule checking (ARC), rule interpretation of
regulatory texts requires considerable effort. However, interpreting regulatory
clauses with implicit properties or complex computational logic is still
challenging due to the lack of domain knowledge and limited expressibility of
conventional logic representations. Thus, LLM-FuncMapper, an approach to
identifying predefined functions needed to interpret various regulatory clauses
based on the large language model (LLM), is proposed. First, through systematic
analysis of building codes, a series of atomic functions is defined to capture
shared computational logics of implicit properties and complex constraints,
creating a database of common blocks for interpreting regulatory clauses. Then,
a prompt template with the chain of thought is developed and further enhanced
with a classification-based tuning strategy, enabling common LLMs to perform
effective function identification. Finally, the proposed approach is validated
with statistical analysis, experiments, and proof of concept. Statistical
analysis reveals a long-tail distribution and high expressibility of the
developed function database, with which almost 100% of computer-processible
clauses can be interpreted and represented as computer-executable codes.
Experiments show that LLM-FuncMapper achieves promising results in identifying
relevant predefined functions for rule interpretation. Further proof of concept
in automated rule interpretation also demonstrates the potential of
LLM-FuncMapper for interpreting complex regulatory clauses. To the best of our
knowledge, this study is the first attempt to introduce LLMs for understanding
and interpreting complex regulatory clauses, which may shed light on further
adoption of LLMs in the construction domain.
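To make the abstract's pipeline concrete, below is a minimal sketch of the core idea: prompt an LLM with a database of predefined atomic functions and a chain-of-thought template, then parse which functions it says a clause requires. The function names, descriptions, prompt wording, and example clause are illustrative assumptions, not the authors' actual function database or prompts.

```python
# Minimal sketch (not the authors' implementation) of LLM-FuncMapper's core idea:
# a small database of predefined "atomic functions" plus a chain-of-thought prompt
# asking an LLM which functions a regulatory clause needs. All names are hypothetical.

from typing import Callable, List

# Hypothetical atomic-function database: name -> shared computational logic it captures.
ATOMIC_FUNCTIONS = {
    "get_property":    "Retrieve an explicit property value of a building element.",
    "derive_property": "Compute an implicit property (e.g., area from dimensions).",
    "compare":         "Compare a property value against a required threshold.",
    "count_elements":  "Count building elements of a given type within a scope.",
    "check_distance":  "Compute the spatial distance between two elements.",
}

PROMPT_TEMPLATE = """You are interpreting a building-code clause for automated rule checking.
Available predefined functions:
{function_list}

Clause: "{clause}"

Let's think step by step:
1. Identify the entities and properties the clause constrains.
2. Decide which properties are implicit and must be derived.
3. Select the predefined functions needed to evaluate the clause.

Answer with a comma-separated list of function names only."""


def build_prompt(clause: str) -> str:
    """Fill the chain-of-thought template with the clause and the function database."""
    function_list = "\n".join(f"- {name}: {desc}" for name, desc in ATOMIC_FUNCTIONS.items())
    return PROMPT_TEMPLATE.format(function_list=function_list, clause=clause)


def identify_functions(clause: str, call_llm: Callable[[str], str]) -> List[str]:
    """Ask any text-completion callable which atomic functions the clause requires."""
    answer = call_llm(build_prompt(clause))
    return [name.strip() for name in answer.split(",") if name.strip() in ATOMIC_FUNCTIONS]


if __name__ == "__main__":
    # Stand-in for a real LLM call so the sketch runs without external services.
    fake_llm = lambda prompt: "derive_property, compare"
    clause = "The net area of each fire compartment shall not exceed 2000 square metres."
    print(identify_functions(clause, fake_llm))  # ['derive_property', 'compare']
```

Per the abstract, the identified functions then serve as common building blocks from which computer-executable rule checks are assembled.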
Related papers
- RuAG: Learned-rule-augmented Generation for Large Language Models [62.64389390179651]
We propose a novel framework, RuAG, to automatically distill large volumes of offline data into interpretable first-order logic rules.
We evaluate our framework on public and private industrial tasks, including natural language processing, time-series analysis, decision-making, and industrial applications.
arXiv Detail & Related papers (2024-11-04T00:01:34Z)
- Reversal of Thought: Enhancing Large Language Models with Preference-Guided Reverse Reasoning Warm-up [9.42385235462794]
Large language models (LLMs) have shown remarkable performance in reasoning tasks but face limitations in mathematical and complex logical reasoning.
We propose Reversal of Thought (RoT), a novel framework aimed at enhancing the logical reasoning abilities of LLMs.
RoT utilizes a Preference-Guided Reverse Reasoning warm-up strategy, which integrates logical symbols for pseudocode planning.
arXiv Detail & Related papers (2024-10-16T07:44:28Z)
- Aligning with Logic: Measuring, Evaluating and Improving Logical Consistency in Large Language Models [31.558429029429863]
We study logical consistency of Large Language Models (LLMs) as a prerequisite for more reliable and trustworthy systems.
We first propose a universal framework to quantify the logical consistency via three fundamental proxies: transitivity, commutativity and negation invariance.
We then evaluate logical consistency, using the defined measures, of a wide range of LLMs, demonstrating that it can serve as a strong proxy for overall robustness.
arXiv Detail & Related papers (2024-10-03T04:34:04Z)
- Proof of Thought: Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning [1.3003982724617653]
Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning.
This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs.
Key contributions include a robust type system with sort management for enhanced logical integrity and explicit representation of rules for a clear distinction between factual and inferential knowledge.
arXiv Detail & Related papers (2024-09-25T18:35:45Z)
- Inductive Learning of Logical Theories with LLMs: A Complexity-graded Analysis [9.865771016218549]
This work presents a novel systematic methodology to analyse the capabilities and limitations of Large Language Models (LLMs) in inductive learning of logical theories.
The analysis is complexity-graded w.r.t. rule dependency structure, allowing quantification of specific inference challenges on LLM performance.
arXiv Detail & Related papers (2024-08-15T16:41:00Z)
- On the Design and Analysis of LLM-Based Algorithms [74.7126776018275]
Large language models (LLMs) are increasingly used as sub-routines within larger algorithms and have achieved remarkable empirical success.
Our proposed framework holds promise for advancing the design and analysis of such LLM-based algorithms.
arXiv Detail & Related papers (2024-07-20T07:39:07Z)
- Guiding LLM Temporal Logic Generation with Explicit Separation of Data and Control [0.7580487359358722]
Temporal logics are powerful tools that are widely used for the synthesis and verification of reactive systems.
Recent progress on Large Language Models has the potential to make the process of writing such specifications more accessible.
arXiv Detail & Related papers (2024-06-11T16:07:24Z)
- Potential and Limitations of LLMs in Capturing Structured Semantics: A Case Study on SRL [78.80673954827773]
Large Language Models (LLMs) play a crucial role in capturing structured semantics to enhance language understanding, improve interpretability, and reduce bias.
We propose using Semantic Role Labeling (SRL) as a fundamental task to explore LLMs' ability to extract structured semantics.
We find interesting potential: LLMs can indeed capture semantic structures, and scaling-up doesn't always mirror potential.
We are surprised to discover significant overlap in the errors made by both LLMs and untrained humans, accounting for almost 30% of all errors.
arXiv Detail & Related papers (2024-05-10T11:44:05Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches are adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, while the structural outputs can be naturally regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, rendering higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning proofs.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)