Construction contract risk identification based on knowledge-augmented
language model
- URL: http://arxiv.org/abs/2309.12626v1
- Date: Fri, 22 Sep 2023 05:27:06 GMT
- Title: Construction contract risk identification based on knowledge-augmented
language model
- Authors: Saika Wong, Chunmo Zheng, Xing Su, Yinqiu Tang
- Abstract summary: This paper presents a novel approach that leverages large language models with construction contract knowledge to emulate the process of contract review by human experts.
The use of natural language when building the domain knowledge base facilitates practical implementation.
- Score: 1.870031206586792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contract review is an essential step in construction projects to prevent
potential losses. However, the current methods for reviewing construction
contracts lack effectiveness and reliability, leading to time-consuming and
error-prone processes. While large language models (LLMs) have shown promise in
revolutionizing natural language processing (NLP) tasks, they struggle with
domain-specific knowledge and addressing specialized issues. This paper
presents a novel approach that leverages LLMs with construction contract
knowledge to emulate the process of contract review by human experts. Our
tuning-free approach incorporates construction contract domain knowledge to
enhance language models for identifying construction contract risks. The use of
natural language when building the domain knowledge base facilitates
practical implementation. We evaluated our method on real construction
contracts and achieved solid performance. Additionally, we investigated how
large language models employ logical thinking during the task and provide
insights and recommendations for future research.
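To make the tuning-free, knowledge-augmented workflow concrete, here is a minimal Python sketch (not the authors' implementation) of the general pattern the abstract describes: a natural-language knowledge base of contract risk rules is matched against a clause and injected into the prompt of an off-the-shelf LLM. The knowledge entries, the word-overlap retrieval heuristic, and the call_llm stub are illustrative assumptions.

    # Minimal sketch of knowledge-augmented contract risk identification.
    # The knowledge base entries, retrieval heuristic, and LLM stub are
    # illustrative assumptions, not the paper's actual implementation.

    # Hypothetical knowledge base: each entry is a plain-language risk rule.
    KNOWLEDGE_BASE = [
        "Clauses allowing unilateral changes to the scope of work without price "
        "adjustment expose the contractor to cost-overrun risk.",
        "Liquidated damages without a cap create unbounded delay liability.",
        "Payment terms tied solely to owner certification risk withheld payment.",
    ]

    def retrieve(clause: str, knowledge: list[str], top_k: int = 2) -> list[str]:
        """Rank knowledge entries by naive word overlap with the clause (illustrative only)."""
        clause_words = set(clause.lower().split())
        scored = sorted(
            knowledge,
            key=lambda entry: len(clause_words & set(entry.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

    def build_prompt(clause: str, knowledge: list[str]) -> str:
        """Compose a tuning-free prompt that injects domain knowledge before the clause."""
        context = "\n".join(f"- {rule}" for rule in retrieve(clause, knowledge))
        return (
            "You are reviewing a construction contract clause.\n"
            f"Relevant domain knowledge:\n{context}\n\n"
            f"Clause:\n{clause}\n\n"
            "Identify any contractual risks and explain your reasoning step by step."
        )

    def call_llm(prompt: str) -> str:
        """Stub for an LLM call; replace with the API client of your choice."""
        return "<model response would appear here>"

    if __name__ == "__main__":
        clause = ("The owner may modify the scope of work at any time, and the "
                  "contractor shall proceed without adjustment to the contract price.")
        print(call_llm(build_prompt(clause, KNOWLEDGE_BASE)))

Because the domain knowledge is stored as ordinary sentences rather than model weights or formal rules, updating the review behavior amounts to editing the knowledge base, which is the practical advantage the abstract highlights.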
Related papers
- Bridging Domain Knowledge and Process Discovery Using Large Language Models [0.0]
This paper leverages Large Language Models (LLMs) to integrate domain knowledge directly into process discovery.
We use rules derived from LLMs to guide model construction, ensuring alignment with both domain knowledge and actual process executions.
arXiv Detail & Related papers (2024-08-30T14:23:40Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- Establishing Trustworthiness: Rethinking Tasks and Model Evaluation [36.329415036660535]
We argue that it is time to rethink what constitutes tasks and model evaluation in NLP.
We review existing compartmentalized approaches for understanding the origins of a model's functional capacity.
arXiv Detail & Related papers (2023-10-09T06:32:10Z)
- A knowledge representation approach for construction contract knowledge modeling [1.870031206586792]
The emergence of large language models (LLMs) presents an unprecedented opportunity to automate construction contract management.
LLMs may produce convincing yet inaccurate and misleading content due to a lack of domain expertise.
This paper introduces the Nested Contract Knowledge Graph (NCKG), a knowledge representation approach that captures the complexity of contract knowledge using a nested structure.
arXiv Detail & Related papers (2023-09-21T14:53:36Z)
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- Knowledge Rumination for Pre-trained Language Models [77.55888291165462]
We propose a new paradigm dubbed Knowledge Rumination to help the pre-trained language model utilize related latent knowledge without retrieving it from the external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural Language Instructions [53.21504989297547]
We propose a new method that combines a language model and reinforcement learning for the task of building objects in a Minecraft-like environment.
Our method first generates a set of consistently achievable sub-goals from the instructions and then completes associated sub-tasks with a pre-trained RL policy.
arXiv Detail & Related papers (2022-11-01T18:30:42Z)
- LM-CORE: Language Models with Contextually Relevant External Knowledge [13.451001884972033]
We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements.
We present LM-CORE -- a general framework to achieve this -- that allows decoupling of the language model training from the external knowledge source.
Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge probing tasks.
arXiv Detail & Related papers (2022-08-12T18:59:37Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose a novel contrastive learning framework named ERICA in pre-training phase to obtain a deeper understanding of the entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.