ULTRA: Unleash LLMs' Potential for Event Argument Extraction through
Hierarchical Modeling and Pair-wise Refinement
- URL: http://arxiv.org/abs/2401.13218v1
- Date: Wed, 24 Jan 2024 04:13:28 GMT
- Title: ULTRA: Unleash LLMs' Potential for Event Argument Extraction through
Hierarchical Modeling and Pair-wise Refinement
- Authors: Xinliang Frederick Zhang, Carter Blum, Temma Choji, Shalin Shah,
Alakananda Vempala
- Abstract summary: Event argument extraction (EAE) is the task of identifying role-specific text spans (i.e., arguments) for a given event.
We propose a hierarchical framework that extracts event arguments more cost-effectively.
- Score: 6.39480325103865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structural extraction of events within discourse is critical since it
enables a deeper understanding of communication patterns and behavior trends. Event
argument extraction (EAE), at the core of event-centric understanding, is the
task of identifying role-specific text spans (i.e., arguments) for a given
event. Document-level EAE (DocEAE) focuses on arguments that are scattered
across an entire document. In this work, we explore the capabilities of open
source Large Language Models (LLMs), i.e., Flan-UL2, for the DocEAE task. To
this end, we propose ULTRA, a hierarchical framework that extracts event
arguments more cost-effectively -- the method needs as few as 50 annotations
and does not require calls to costly API endpoints. Further, it alleviates the
positional bias issue intrinsic to LLMs. ULTRA first sequentially reads text
chunks of a document to generate a candidate argument set, upon which ULTRA
learns to drop non-pertinent candidates through self-refinement. We further
introduce LEAFER to address the challenge LLMs face in locating the exact
boundary of an argument span. ULTRA outperforms strong baselines, including
supervised models and ChatGPT, by 9.8% when evaluated by the exact match
(EM) metric.
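Below is a minimal Python sketch of the pipeline the abstract describes (sequential chunk reading, candidate generation, then pair-wise self-refinement). The chunk size, prompt wording, and the `generate` wrapper around an LLM such as Flan-UL2 are illustrative assumptions, not the authors' implementation; LEAFER's boundary correction is omitted.

```python
from typing import Callable, List

def ultra_sketch(
    document: str,
    event: str,
    role: str,
    generate: Callable[[str], str],   # caller-supplied wrapper around an LLM, e.g. Flan-UL2
    chunk_size: int = 200,            # illustrative chunk length (in words)
) -> List[str]:
    """Hedged sketch: chunked candidate generation followed by pair-wise self-refinement."""
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

    # Stage 1: read chunks sequentially and collect candidate arguments for the role.
    candidates: List[str] = []
    for chunk in chunks:
        prompt = (
            f"Event: {event}\nRole: {role}\nText: {chunk}\n"
            f"List the text spans in the text that fill this role, separated by ';', or 'none'."
        )
        answer = generate(prompt)
        candidates += [s.strip() for s in answer.split(";") if s.strip().lower() != "none"]

    # Stage 2: pair-wise refinement -- ask the model to compare candidates and drop the weaker one.
    kept = list(dict.fromkeys(candidates))  # deduplicate while preserving reading order
    i = 0
    while i + 1 < len(kept):
        a, b = kept[i], kept[i + 1]
        verdict = generate(
            f"Event: {event}\nRole: {role}\nCandidate A: {a}\nCandidate B: {b}\n"
            f"Answer 'A', 'B', or 'both' for the candidate(s) that truly fill the role."
        ).strip().lower()
        if verdict == "a":
            kept.pop(i + 1)
        elif verdict == "b":
            kept.pop(i)
        else:
            i += 1
    return kept
```

The pair-wise comparison here stands in for ULTRA's self-refinement step, which the paper reports can be learned from as few as 50 annotations.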
Related papers
- One Small and One Large for Document-level Event Argument Extraction [13.25071868664492]
Document-level Event Argument Extraction (EAE) faces two challenges due to increased input length.
The first method, the Co and Structure Event Argument Extraction model (CsEAE), is based on Small Language Models (SLMs).
The second method introduces new prompts to transform the extraction task into a generative task suitable for Large Language Models (LLMs).
arXiv Detail & Related papers (2024-11-08T14:44:01Z)
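The second approach in the entry above recasts document-level EAE as text generation. A hedged illustration of what such a prompt might look like; the `generative_eae_prompt` helper and its template wording are assumptions, not taken from the paper.

```python
def generative_eae_prompt(document: str, event_type: str, trigger: str, roles: list[str]) -> str:
    """Hypothetical prompt that casts document-level EAE as text generation (not the paper's exact prompt)."""
    role_slots = "\n".join(f"- {r}: <argument span or 'none'>" for r in roles)
    return (
        "Extract the arguments of the event from the document.\n"
        f"Document: {document}\n"
        f"Event type: {event_type}\nTrigger: {trigger}\n"
        "Fill in each role with a span copied verbatim from the document:\n"
        f"{role_slots}"
    )

# Example usage with any generate() wrapper around an LLM:
# answer = generate(generative_eae_prompt(doc, "Attack", "bombing", ["Attacker", "Target", "Place"]))
```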
- Graph-DPEP: Decomposed Plug and Ensemble Play for Few-Shot Document Relation Extraction with Graph-of-Thoughts Reasoning [34.85741925091139]
Graph-DPEP framework is grounded in the reasoning behind triplet explanation thoughts presented in natural language.
We develop "ensemble-play", reapplying generation on the entire type list by leveraging the reasoning thoughts embedded in a sub-graph.
arXiv Detail & Related papers (2024-11-05T07:12:36Z)
- PromptReps: Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval [76.50690734636477]
We propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus.
The retrieval system harnesses both dense text embedding and sparse bag-of-words representations.
arXiv Detail & Related papers (2024-04-29T04:51:30Z)
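A hedged sketch of the idea behind the PromptReps entry above: prompt a causal LLM once and reuse its internals as two representations, the last hidden state as a dense embedding and the next-token logits as sparse term weights. The placeholder model and prompt wording are assumptions, not the released implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; the paper uses larger instruction-tuned LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def prompt_reps(text: str):
    """Return (dense embedding, sparse term weights) for one document -- a hedged sketch."""
    prompt = f'Passage: "{text}"\nUse one word to represent the passage:'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    dense = out.hidden_states[-1][0, -1]   # last-token hidden state as dense embedding
    logits = out.logits[0, -1]             # next-token distribution as a sparse bag of words
    top = torch.topk(logits, k=50)         # keep the highest-scoring vocabulary terms
    sparse = {tokenizer.decode(int(i)).strip(): v.item() for i, v in zip(top.indices, top.values)}
    return dense, sparse

# Documents and queries encoded the same way can be matched with a dot product (dense)
# or via overlapping term weights (sparse), without any retrieval-specific training.
```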
- MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation [104.6065882758648]
MAVEN-Arg is the first all-in-one dataset supporting event detection, event argument extraction, and event relation extraction.
As an EAE benchmark, MAVEN-Arg offers three main advantages: (1) a comprehensive schema covering 162 event types and 612 argument roles, all with expert-written definitions and examples; (2) a large data scale, containing 98,591 events and 290,613 arguments obtained with laborious human annotation; and (3) the exhaustive annotation supporting all task variants of EAE.
arXiv Detail & Related papers (2023-11-15T16:52:14Z)
- Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance [12.239459451494872]
Document-level event argument extraction poses new challenges of long input and cross-sentence inference.
We propose a Span-trigger-based Contextual Pooling and latent Role Guidance model.
arXiv Detail & Related papers (2023-10-08T11:29:10Z)
- PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents [78.27865456183397]
We propose PEARL, a prompting framework to improve reasoning over long documents.
Each stage of PEARL is implemented via zero-shot or few-shot prompting with minimal human input.
We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts.
arXiv Detail & Related papers (2023-05-23T23:06:04Z)
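A hedged plan-then-execute sketch of the prompting pattern the PEARL entry above describes, using a caller-supplied `generate` wrapper; the action vocabulary and prompts are assumptions rather than PEARL's actual stages.

```python
from typing import Callable

def pearl_sketch(document: str, question: str, generate: Callable[[str], str]) -> str:
    """Hedged sketch: ask the LLM for a plan of actions, then execute each action over the document."""
    # Stage 1: plan -- decompose the question into a sequence of simpler actions.
    plan = generate(
        f"Question about a long document: {question}\n"
        "Write a numbered plan of simple actions (e.g. FIND, SUMMARIZE, COMPARE) to answer it."
    )
    # Stage 2: execute -- run each planned action, feeding earlier results into later steps.
    scratchpad = ""
    for step in [s for s in plan.splitlines() if s.strip()]:
        scratchpad += "\n" + generate(
            f"Document:\n{document}\n\nIntermediate results:{scratchpad}\n\n"
            f"Carry out this step and report the result: {step}"
        )
    # Final answer conditioned on the accumulated intermediate results.
    return generate(
        f"Question: {question}\nIntermediate results:{scratchpad}\nGive the final answer."
    )
```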
- Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [59.74002011562726]
We propose a novel linguistic cue-based chain-of-thought prompting method (Cue-CoT) to provide a more personalized and engaging response.
We build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English.
Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
arXiv Detail & Related papers (2023-05-19T16:27:43Z)
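A hedged two-step sketch of the cue-based prompting idea in the Cue-CoT entry above: first infer linguistic cues about the user from the dialogue, then condition the response on them. The prompt wording is an assumption, not the paper's templates.

```python
from typing import Callable

def cue_cot_sketch(dialogue: str, generate: Callable[[str], str]) -> str:
    """Hedged sketch of cue-based chain-of-thought prompting for dialogue response generation."""
    # Step 1: infer linguistic cues (user status) from the dialogue context.
    cues = generate(
        f"Dialogue:\n{dialogue}\n"
        "Describe the user's emotion, personality, and underlying needs implied by the dialogue."
    )
    # Step 2: generate a response conditioned on the inferred cues.
    return generate(
        f"Dialogue:\n{dialogue}\nInferred user status: {cues}\n"
        "Write a helpful, personalized response that addresses the inferred needs."
    )
```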
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
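A hedged sketch of the setup in the Directional Stimulus Prompting entry above: a small tunable policy model emits an instance-specific hint that is appended to the prompt of a frozen black-box LLM. The summarization task and hint format are illustrative assumptions.

```python
from typing import Callable

def dsp_sketch(
    article: str,
    policy: Callable[[str], str],     # small tunable policy model (e.g. a fine-tuned T5) -- assumption
    generate: Callable[[str], str],   # frozen black-box LLM
) -> str:
    """Hedged sketch of directional stimulus prompting, illustrated on summarization."""
    # The policy model produces an instance-specific hint (e.g. keywords) rather than the output itself.
    hint = policy(f"Extract keywords that a good summary of this article should mention:\n{article}")
    # The black-box LLM is steered by appending the hint to an otherwise ordinary prompt.
    return generate(f"Article:\n{article}\nHint - cover these keywords: {hint}\nSummary:")

# The policy model itself would be tuned (the paper trains it against the LLM's outputs);
# that training loop is omitted from this sketch.
```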
- Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations [70.35379323231241]
This paper presents a better approach for event extraction by explicitly utilizing the relationships of event arguments.
We employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turn, iterative process.
arXiv Detail & Related papers (2021-06-23T13:24:39Z)
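A hedged sketch of the multi-turn, iterative extraction loop suggested by the entry above, phrased as question answering over the sentence; the question template is an assumption, and the reinforcement-learning and incremental-learning components that decide the extraction order are omitted.

```python
from typing import Callable, Dict, List

def iterative_argument_extraction(
    sentence: str,
    trigger: str,
    roles: List[str],
    answer: Callable[[str, str], str],   # e.g. an extractive QA model: (question, context) -> span
) -> Dict[str, str]:
    """Hedged sketch: extract arguments one role per turn, letting earlier answers inform later turns."""
    history: Dict[str, str] = {}
    for role in roles:  # the paper learns this ordering with RL; a fixed order is used here
        known = "; ".join(f"{r} = {v}" for r, v in history.items()) or "none yet"
        question = (
            f"For the event triggered by '{trigger}' (already extracted: {known}), "
            f"which span fills the role '{role}'?"
        )
        span = answer(question, sentence)
        if span:
            history[role] = span
    return history
```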
- Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding [40.13163091122463]
Event extraction is a difficult task since it requires a view of a larger context to determine which spans of text correspond to event role fillers.
We first investigate how end-to-end neural sequence models perform on document-level role filler extraction.
We show that our best system performs substantially better than prior work.
arXiv Detail & Related papers (2020-05-13T20:42:17Z)
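A minimal sketch of casting role filler extraction as token-level BIO tagging with a contextualized encoder, in the spirit of the entry above; the BiLSTM encoder is a simplification and not the paper's multi-granularity model.

```python
import torch
import torch.nn as nn

class RoleFillerTagger(nn.Module):
    """Hedged sketch: tag each token with BIO labels over role types (single-granularity encoder)."""

    def __init__(self, vocab_size: int, num_roles: int, emb_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # B-role / I-role labels for every role type, plus O.
        self.classifier = nn.Linear(2 * hidden, 2 * num_roles + 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)   # (batch, seq, emb_dim)
        h, _ = self.encoder(x)      # (batch, seq, 2 * hidden)
        return self.classifier(h)   # per-token logits over BIO labels

# Usage sketch: logits = RoleFillerTagger(vocab_size=30000, num_roles=5)(torch.randint(0, 30000, (2, 40)))
# Training would minimize cross-entropy against gold BIO tags; decoding reads off contiguous B-I spans.
```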