Enhancing Court View Generation with Knowledge Injection and Guidance
- URL: http://arxiv.org/abs/2403.04366v1
- Date: Thu, 7 Mar 2024 09:51:11 GMT
- Title: Enhancing Court View Generation with Knowledge Injection and Guidance
- Authors: Ang Li, Yiquan Wu, Yifei Liu, Fei Wu, Ming Cai, Kun Kuang
- Abstract summary: Court View Generation (CVG) aims to generate court views based on the plaintiff claims and the fact descriptions.
Pretrained Language Models (PLMs) have showcased their prowess in natural language generation, but their application to the complex, knowledge-intensive domain of CVG often reveals inherent limitations.
We present a novel approach, named Knowledge Injection and Guidance (KIG), designed to bolster CVG using PLMs.
To efficiently incorporate domain knowledge during the training stage, we introduce a knowledge-injected prompt encoder for prompt tuning, thereby reducing computational overhead.
- Score: 43.32071790286732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Court View Generation (CVG) is a challenging task in the field of Legal
Artificial Intelligence (LegalAI), which aims to generate court views based on
the plaintiff claims and the fact descriptions. While Pretrained Language
Models (PLMs) have showcased their prowess in natural language generation,
their application to the complex, knowledge-intensive domain of CVG often
reveals inherent limitations. In this paper, we present a novel approach, named
Knowledge Injection and Guidance (KIG), designed to bolster CVG using PLMs. To
efficiently incorporate domain knowledge during the training stage, we
introduce a knowledge-injected prompt encoder for prompt tuning, thereby
reducing computational overhead. Moreover, to further enhance the model's
ability to utilize domain knowledge, we employ a generating navigator, which
dynamically guides the text generation process in the inference stage without
altering the model's architecture, making it readily transferable.
Comprehensive experiments on real-world data demonstrate the effectiveness of
our approach compared to several established baselines, especially in the
responsivity of claims, where it outperforms the best baseline by 11.87%.
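As a rough illustration of the training-stage component described in the abstract, the sketch below shows a knowledge-injected prompt encoder in the style of prompt tuning: a domain-knowledge embedding (for example, of relevant law articles) is mapped into soft prompt vectors that are prepended to the frozen PLM's input embeddings, so only the small encoder is trained and computational overhead stays low. The class name, architecture, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a knowledge-injected prompt encoder for prompt tuning.
# All names and design choices are hypothetical; the KIG paper's encoder may differ.
import torch
import torch.nn as nn

class KnowledgePromptEncoder(nn.Module):
    """Maps a domain-knowledge embedding into soft prompt vectors that are
    prepended to the (frozen) PLM's token embeddings."""

    def __init__(self, knowledge_dim: int, hidden_dim: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        self.hidden_dim = hidden_dim
        # Learnable base prompt tokens, as in standard prompt tuning.
        self.base_prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
        # Small network that projects the knowledge vector into prompt space.
        self.knowledge_proj = nn.Sequential(
            nn.Linear(knowledge_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, prompt_len * hidden_dim),
        )

    def forward(self, knowledge_vec: torch.Tensor) -> torch.Tensor:
        # knowledge_vec: (batch, knowledge_dim), e.g. an embedding of relevant law articles.
        batch = knowledge_vec.size(0)
        injected = self.knowledge_proj(knowledge_vec).view(
            batch, self.prompt_len, self.hidden_dim
        )
        # Combine the learnable base prompts with the knowledge-conditioned part.
        return self.base_prompt.unsqueeze(0) + injected


# Usage: prepend the soft prompts to the PLM's token embeddings; only the
# prompt encoder's parameters are updated, the PLM itself stays frozen.
encoder = KnowledgePromptEncoder(knowledge_dim=256, hidden_dim=768, prompt_len=16)
knowledge_vec = torch.randn(4, 256)            # batch of 4 knowledge embeddings
soft_prompts = encoder(knowledge_vec)          # (4, 16, 768)
token_embeds = torch.randn(4, 128, 768)        # embeddings of fact-description tokens
plm_inputs = torch.cat([soft_prompts, token_embeds], dim=1)  # (4, 144, 768)
```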
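For the inference-stage component, a "generating navigator" that guides decoding without altering the model's architecture can be pictured as a re-weighting of the PLM's next-token logits at each step. The fusion rule and the navigator interface below are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of inference-time guidance: the base PLM is untouched and a
# separate navigator score is fused into its next-token logits at every step.
import torch

@torch.no_grad()
def guided_greedy_decode(plm, navigator, input_ids, max_new_tokens=64, alpha=1.0):
    """plm(input_ids) returns logits of shape (batch, seq_len, vocab_size);
    navigator(input_ids) returns guidance scores of shape (batch, vocab_size)."""
    for _ in range(max_new_tokens):
        logits = plm(input_ids)[:, -1, :]      # base model's next-token logits
        guidance = navigator(input_ids)        # knowledge-aware guidance scores
        fused = logits + alpha * guidance      # guide generation without retraining
        next_token = fused.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```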
Related papers
- A Multi-Source Heterogeneous Knowledge Injected Prompt Learning Method for Legal Charge Prediction [3.52209555388364]
We propose a method based on a prompt learning framework for modeling case descriptions.
We leverage multi-source external knowledge from a legal knowledge base, a conversational LLM, and legal articles.
Our method achieves state-of-the-art results on CAIL-2018, the largest legal charge prediction dataset.
arXiv Detail & Related papers (2024-08-05T04:53:17Z)
- Empowering Prior to Court Legal Analysis: A Transparent and Accessible Dataset for Defensive Statement Classification and Interpretation [5.646219481667151]
This paper introduces a novel dataset tailored for the classification of statements made during police interviews, prior to court proceedings.
We introduce a fine-tuned DistilBERT model that achieves state-of-the-art performance in distinguishing truthful from deceptive statements.
We also present an XAI interface that empowers both legal professionals and non-specialists to interact with and benefit from our system.
arXiv Detail & Related papers (2024-05-17T11:22:27Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
Retrieval-augmented LLMs (RA-LLMs) have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- ProgGen: Generating Named Entity Recognition Datasets Step-by-step with Self-Reflexive Large Language Models [25.68491572293656]
Large Language Models fall short in structured knowledge extraction tasks such as named entity recognition.
This paper explores an innovative, cost-efficient strategy to harness LLMs with modest NER capabilities for producing superior NER datasets.
arXiv Detail & Related papers (2024-03-17T06:12:43Z)
- ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based Healthcare Decision Support using ChatGPT [15.973406739758856]
This study presents an innovative approach to the application of large language models (LLMs) in clinical decision-making, focusing on OpenAI's ChatGPT.
Our approach introduces the use of contextual prompts, strategically designed to include a task description, a feature description, and, crucially, integrated domain knowledge, for high-quality binary classification even in data-scarce scenarios (an illustrative prompt-construction sketch follows this list).
arXiv Detail & Related papers (2023-08-17T20:50:46Z)
- KITLM: Domain-Specific Knowledge InTegration into Language Models for Question Answering [30.129418454426844]
Large language models (LLMs) have demonstrated remarkable performance in a wide range of natural language tasks.
We propose KITLM, a novel approach for integrating a knowledge base into a language model through relevant information infusion.
Our knowledge-infused model surpasses both GPT-3.5-turbo and the state-of-the-art knowledge infusion method, SKILL, achieving over 1.5 times improvement in exact match scores on MetaQA.
arXiv Detail & Related papers (2023-08-07T14:42:49Z)
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective for exploiting both structured and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weakly supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration [83.96729205383501]
We introduce prompt-based learning to achieve fast adaptation for language embeddings.
Our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.
arXiv Detail & Related papers (2022-03-08T11:01:24Z)
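As flagged in the ChatGPT-HealthPrompt entry above, the following is a purely illustrative sketch of how a contextual prompt combining a task description, feature descriptions, and domain knowledge might be assembled for a binary classification task. The function name, field names, and example values are hypothetical and not taken from that paper.

```python
# Illustrative only: assembling a contextual prompt from a task description,
# feature descriptions, domain knowledge, and a single record.
def build_contextual_prompt(task_desc: str, feature_desc: dict, domain_knowledge: str, record: dict) -> str:
    features = "\n".join(f"- {name}: {desc}" for name, desc in feature_desc.items())
    values = "\n".join(f"- {name}: {record[name]}" for name in feature_desc)
    return (
        f"Task: {task_desc}\n\n"
        f"Feature descriptions:\n{features}\n\n"
        f"Domain knowledge:\n{domain_knowledge}\n\n"
        f"Record:\n{values}\n\n"
        "Answer with exactly one label: yes or no."
    )

# Usage with hypothetical values; the resulting string would be sent to the LLM.
prompt = build_contextual_prompt(
    task_desc="Predict whether the patient is at risk of heart disease (binary: yes/no).",
    feature_desc={"age": "age in years", "chol": "serum cholesterol in mg/dl"},
    domain_knowledge="Cholesterol above 240 mg/dl is generally considered high risk.",
    record={"age": 63, "chol": 280},
)
print(prompt)
```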
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.