Document-Level Event Extraction with Definition-Driven ICL
- URL: http://arxiv.org/abs/2408.05566v1
- Date: Sat, 10 Aug 2024 14:24:09 GMT
- Title: Document-Level Event Extraction with Definition-Driven ICL
- Authors: Zhuoyuan Liu, Yilin Luo
- Abstract summary: We propose an optimization strategy called "Definition-driven Document-level Event Extraction (DDEE)."
By adjusting the length of the prompt and enhancing the clarity of prompts, we have significantly improved the event extraction performance of Large Language Models (LLMs).
In addition, the introduction of structured methods and strict limiting conditions has improved the precision of event and argument role extraction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of Natural Language Processing (NLP), Large Language Models (LLMs) have shown great potential in document-level event extraction tasks, but existing methods face challenges in the design of prompts. To address this issue, we propose an optimization strategy called "Definition-driven Document-level Event Extraction (DDEE)." By adjusting the length of the prompt and enhancing the clarity of heuristics, we have significantly improved the event extraction performance of LLMs. We used data balancing techniques to solve the long-tail effect problem, enhancing the model's generalization ability for event types. At the same time, we refined the prompt to ensure it is both concise and comprehensive, adapting to the sensitivity of LLMs to the style of prompts. In addition, the introduction of structured heuristic methods and strict limiting conditions has improved the precision of event and argument role extraction. These strategies not only solve the prompt engineering problems of LLMs in document-level event extraction but also promote the development of event extraction technology, providing new research perspectives for other tasks in the NLP field.
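The abstract's strategy of pairing explicit event definitions with structured output and strict limiting conditions can be illustrated with a minimal prompt builder. This is a hypothetical sketch, not the paper's actual implementation: the event types, role names, and constraint wording below are invented for illustration.

```python
import json

# Hypothetical event-type definitions; names and argument roles are
# illustrative only, not taken from the DDEE paper's schema.
EVENT_DEFINITIONS = {
    "Acquisition": {
        "definition": "One company purchases a controlling stake in another.",
        "roles": ["acquirer", "target", "price", "date"],
    },
    "Leadership.Change": {
        "definition": "A person starts or ends an executive position at an organization.",
        "roles": ["person", "organization", "position", "date"],
    },
}

def build_ddee_prompt(document: str) -> str:
    """Assemble a concise, definition-driven extraction prompt.

    Mirrors the strategy described in the abstract: explicit event
    definitions for clarity, a fixed structured output format, and
    strict limiting conditions to curb hallucinated arguments.
    """
    lines = ["You are an event extraction system.", "Event type definitions:"]
    for name, spec in EVENT_DEFINITIONS.items():
        roles = ", ".join(spec["roles"])
        lines.append(f"- {name}: {spec['definition']} Roles: {roles}.")
    lines += [
        "Constraints:",
        "- Extract only the event types defined above.",
        "- Every argument value must be a verbatim span from the document.",
        "- If a role is not mentioned, output null; never guess.",
        'Output: a JSON list of {"event_type": ..., "arguments": {role: value}}.',
        "Document:",
        document,
    ]
    return "\n".join(lines)

prompt = build_ddee_prompt("Acme Corp acquired Beta Inc for $2B on May 3.")
print(prompt)
```

Keeping the definitions short and the constraint list fixed reflects the paper's observation that LLMs are sensitive to prompt style: the prompt stays concise while still stating the schema explicitly.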
Related papers
- Enhancing Document-level Argument Extraction with Definition-augmented Heuristic-driven Prompting for LLMs [0.0]
Event Argument Extraction (EAE) is pivotal for extracting structured information from unstructured text.
We propose a novel Definition-augmented Heuristic-driven Prompting (DHP) method to enhance the performance of Large Language Models (LLMs) in document-level EAE.
arXiv Detail & Related papers (2024-08-30T19:03:14Z)
- Strategic Optimization and Challenges of Large Language Models in Object-Oriented Programming [0.0]
This research focuses on method-level code generation within the Object-Oriented Programming (OOP) framework.
We devised experiments that varied the extent of contextual information in the prompts.
Our findings indicate that prompts enriched with method invocation details yield the highest cost-effectiveness.
arXiv Detail & Related papers (2024-08-27T07:44:16Z)
- Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning.
We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging.
We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
arXiv Detail & Related papers (2024-08-18T14:25:44Z)
- Decompose, Enrich, and Extract! Schema-aware Event Extraction using LLMs [45.83950260830323]
This work focuses on harnessing Large Language Models for automated Event Extraction.
It introduces a new method to address hallucination by decomposing the task into Event Detection and Event Argument Extraction.
arXiv Detail & Related papers (2024-06-03T06:55:10Z)
- Efficient Prompting Methods for Large Language Models: A Survey [50.171011917404485]
Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks.
This approach brings the additional computational burden of model inference and human effort to guide and control the behavior of LLMs.
We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions.
arXiv Detail & Related papers (2024-04-01T12:19:08Z)
- LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition [67.96794382040547]
LLM-DA is a novel data augmentation technique based on large language models (LLMs) for the few-shot NER task.
Our approach involves employing 14 contextual rewriting strategies, designing entity replacements of the same type, and incorporating noise injection to enhance robustness.
arXiv Detail & Related papers (2024-02-22T14:19:56Z)
- PhaseEvo: Towards Unified In-Context Prompt Optimization for Large Language Models [9.362082187605356]
We present PhaseEvo, an efficient automatic prompt optimization framework that combines the generative capability of LLMs with the global search proficiency of evolution algorithms.
PhaseEvo significantly outperforms the state-of-the-art baseline methods by a large margin whilst maintaining good efficiency.
arXiv Detail & Related papers (2024-02-17T17:47:10Z)
- LLMs Learn Task Heuristics from Demonstrations: A Heuristic-Driven Prompting Strategy for Document-Level Event Argument Extraction [12.673710691468264]
We introduce Heuristic-Driven Link-of-Analogy (HD-LoA) prompting to address the challenge of example selection.
Inspired by the analogical reasoning of humans, we propose link-of-analogy prompting, which enables LLMs to process new situations.
Experiments show that our method outperforms existing prompting methods and few-shot supervised learning methods on document-level EAE datasets.
arXiv Detail & Related papers (2023-11-11T12:05:01Z)
- OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning [49.38867353135258]
We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs.
Our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without causing significant detriment to task performance.
arXiv Detail & Related papers (2023-05-24T10:08:04Z)
- Boosting Event Extraction with Denoised Structure-to-Text Augmentation [52.21703002404442]
Event extraction aims to recognize pre-defined event triggers and arguments from texts.
Recent data augmentation methods often neglect the problem of grammatical incorrectness.
We propose DAEE, a denoised structure-to-text augmentation framework for event extraction.
arXiv Detail & Related papers (2023-05-16T16:52:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.