A knowledge representation approach for construction contract knowledge
modeling
- URL: http://arxiv.org/abs/2309.12132v1
- Date: Thu, 21 Sep 2023 14:53:36 GMT
- Title: A knowledge representation approach for construction contract knowledge
modeling
- Authors: Chunmo Zheng, Saika Wong, Xing Su, Yinqiu Tang
- Abstract summary: The emergence of large language models (LLMs) presents an unprecedented opportunity to automate construction contract management.
LLMs may produce convincing yet inaccurate and misleading content due to a lack of domain expertise.
This paper introduces the Nested Contract Knowledge Graph (NCKG), a knowledge representation approach that captures the complexity of contract knowledge using a nested structure.
- Score: 1.870031206586792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of large language models (LLMs) presents an unprecedented
opportunity to automate construction contract management, reducing human errors
and saving significant time and costs. However, LLMs may produce convincing yet
inaccurate and misleading content due to a lack of domain expertise. To address
this issue, expert-driven contract knowledge can be represented in a structured
manner to constrain the automatic contract management process. This paper
introduces the Nested Contract Knowledge Graph (NCKG), a knowledge
representation approach that captures the complexity of contract knowledge
using a nested structure. It includes a nested knowledge representation
framework, an NCKG ontology built on the framework, and an implementation
method. Furthermore, we present the LLM-assisted contract review pipeline
enhanced with external knowledge in NCKG. Our pipeline achieves promising
performance in contract risk review, shedding light on the combination of
LLMs and KGs towards more reliable and interpretable contract management.
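The review pipeline described in the abstract can be illustrated with a minimal sketch: expert contract knowledge is stored in a structured form, entries relevant to a clause are retrieved, and the LLM prompt is constrained to that retrieved knowledge. All names here (CONTRACT_KG, retrieve_knowledge, build_review_prompt) and the keyword-based retrieval are hypothetical simplifications, not the paper's NCKG implementation; a real pipeline would query a nested knowledge graph and call an actual LLM.

```python
# Toy stand-in for a structured contract knowledge base: each topic maps
# to expert-authored risk-review rules (hypothetical example entries).
CONTRACT_KG = {
    "payment": ["Payment terms should specify a due date and late-payment interest."],
    "delay": ["Delay clauses should define excusable delays and notice periods."],
    "termination": ["Termination clauses should state cure periods before termination."],
}

def retrieve_knowledge(clause: str) -> list[str]:
    """Return expert rules whose topic keyword appears in the clause."""
    clause_lower = clause.lower()
    return [rule
            for topic, rules in CONTRACT_KG.items()
            if topic in clause_lower
            for rule in rules]

def build_review_prompt(clause: str) -> str:
    """Compose an LLM prompt that grounds the review in retrieved knowledge."""
    rules = retrieve_knowledge(clause)
    knowledge = "\n".join(f"- {r}" for r in rules) or "- (no matching rules)"
    return (
        "Review the following contract clause for risks, using ONLY the\n"
        "expert knowledge listed below as grounds for your assessment.\n\n"
        f"Expert knowledge:\n{knowledge}\n\nClause:\n{clause}\n"
    )

prompt = build_review_prompt("The contractor shall notify the owner of any delay.")
print(prompt)
```

The design point mirrored here is that the LLM is not trusted to recall domain knowledge; the structured knowledge base supplies it, and the prompt instructs the model to reason only from that supplied knowledge.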
Related papers
- IAO Prompting: Making Knowledge Flow Explicit in LLMs through Structured Reasoning Templates [7.839338724237275]
We introduce IAO (Input-Action-Output), a structured template-based method that explicitly models how Large Language Models access and apply their knowledge.
IAO decomposes problems into sequential steps, each clearly identifying the input knowledge being used, the action being performed, and the resulting output.
Our findings provide insights into both knowledge representation within LLMs and methods for more reliable knowledge application.
arXiv Detail & Related papers (2025-02-05T11:14:20Z)
- Ontology-grounded Automatic Knowledge Graph Construction by LLM under Wikidata schema [60.42231674887294]
We propose an ontology-grounded approach to Knowledge Graph (KG) construction using Large Language Models (LLMs) on a knowledge base.
We ground the generation of the KG in the authored ontology, based on extracted relations, to ensure consistency and interpretability.
Our work presents a promising direction for a scalable KG construction pipeline with minimal human intervention that yields high-quality, human-interpretable KGs.
arXiv Detail & Related papers (2024-12-30T13:36:05Z)
- KaLM: Knowledge-aligned Autoregressive Language Modeling via Dual-view Knowledge Graph Contrastive Learning [74.21524111840652]
This paper proposes KaLM, a Knowledge-aligned Language Modeling approach.
It fine-tunes autoregressive large language models to align with KG knowledge via the joint objective of explicit knowledge alignment and implicit knowledge alignment.
Notably, our method achieves a significant performance boost in evaluations of knowledge-driven tasks.
arXiv Detail & Related papers (2024-12-06T11:08:24Z)
- KG-RAG: Bridging the Gap Between Knowledge and Creativity [0.0]
Large Language Model Agents (LMAs) face issues such as information hallucinations, catastrophic forgetting, and limitations in processing long contexts.
This paper introduces a KG-RAG (Knowledge Graph-Retrieval Augmented Generation) pipeline to enhance the knowledge capabilities of LMAs.
Preliminary experiments on the ComplexWebQuestions dataset demonstrate notable improvements in the reduction of hallucinated content.
arXiv Detail & Related papers (2024-05-20T14:03:05Z)
- Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models [51.72963030032491]
Knowledge documents for large language models (LLMs) may conflict with the memory of LLMs due to outdated or incorrect knowledge.
We construct a new dataset, dubbed KNOT, to examine knowledge conflict resolution in the form of question answering.
arXiv Detail & Related papers (2024-04-04T16:40:11Z)
- From human experts to machines: An LLM supported approach to ontology and knowledge graph construction [0.0]
Large Language Models (LLMs) have recently gained popularity for their ability to understand and generate human-like natural language.
This work explores the (semi-)automatic construction of KGs facilitated by open-source LLMs.
arXiv Detail & Related papers (2024-03-13T08:50:15Z)
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for Knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating a KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus using the LM equipped with an adapter, while preserving the LM's original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
- Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting [51.7049140329611]
This paper proposes Knowledge Graph-based Retrofitting (KGR) to mitigate factual hallucination during the reasoning process.
Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks.
arXiv Detail & Related papers (2023-11-22T11:08:38Z)
- Construction contract risk identification based on knowledge-augmented language model [1.870031206586792]
This paper presents a novel approach that leverages large language models with construction contract knowledge to emulate the process of contract review by human experts.
The use of natural language when building the domain knowledge base facilitates practical implementation.
arXiv Detail & Related papers (2023-09-22T05:27:06Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.