Global Constraints with Prompting for Zero-Shot Event Argument
Classification
- URL: http://arxiv.org/abs/2302.04459v1
- Date: Thu, 9 Feb 2023 06:39:29 GMT
- Title: Global Constraints with Prompting for Zero-Shot Event Argument
Classification
- Authors: Zizheng Lin, Hongming Zhang and Yangqiu Song
- Abstract summary: We propose to use global constraints with prompting to tackle event argument classification without any annotation and task-specific training.
A pre-trained language model scores the new passages, making the initial prediction.
Our novel prompt templates can easily adapt to all events and argument types without manual effort.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Determining the role of event arguments is a crucial subtask of event
extraction. Most previous supervised models leverage costly annotations, which
is not practical for open-domain applications. In this work, we propose to use
global constraints with prompting to effectively tackle event argument
classification without any annotation and task-specific training. Specifically,
given an event and its associated passage, the model first creates several new
passages by prefix prompts and cloze prompts, where prefix prompts indicate
event type and trigger span, and cloze prompts connect each candidate role with
the target argument span. Then, a pre-trained language model scores the new
passages, making the initial prediction. Our novel prompt templates can easily
adapt to all events and argument types without manual effort. Next, the model
regularizes the prediction by global constraints exploiting cross-task,
cross-argument, and cross-event relations. Extensive experiments demonstrate
our model's effectiveness: it outperforms the best zero-shot baselines by 12.5%
and 10.9% F1 on ACE and ERE with given argument spans and by 4.3% and 3.3% F1,
respectively, without given argument spans. We have made our code publicly
available.
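The prompting step can be sketched in a few lines. This is a minimal illustration, not the authors' released code: the function and variable names are hypothetical, and the pre-trained language model's passage score is stubbed out as a caller-supplied `score_fn`. In the paper, a real PLM scores each prompted passage, and the initial prediction is then regularized by global constraints (omitted here).

```python
def build_prompted_passage(passage, event_type, trigger, argument, role):
    """Build one new passage per candidate role: a prefix prompt states the
    event type and trigger span, and a cloze-style sentence connects the
    candidate role to the target argument span."""
    prefix = f'This is a {event_type} event, triggered by "{trigger}". '
    cloze = f' In this event, "{argument}" plays the role of {role}.'
    return prefix + passage + cloze

def classify_argument(passage, event_type, trigger, argument, roles, score_fn):
    """Score one prompted passage per candidate role with score_fn (a stand-in
    for the pre-trained LM) and return the highest-scoring role."""
    scored = {
        role: score_fn(
            build_prompted_passage(passage, event_type, trigger, argument, role)
        )
        for role in roles
    }
    return max(scored, key=scored.get)
```

With a deterministic toy scorer, `classify_argument("John sold the car to Mary.", "Transaction", "sold", "Mary", ["Seller", "Buyer"], score_fn)` returns whichever role the scorer prefers; in practice the score would be the PLM's likelihood of the prompted passage.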
Related papers
- CorefPrompt: Prompt-based Event Coreference Resolution by Measuring
Event Type and Argument Compatibilities [16.888201607072318]
Event coreference resolution (ECR) aims to group event mentions referring to the same real-world event into clusters.
We propose a prompt-based approach, CorefPrompt, to transform ECR into a cloze-style (masked language model) task.
This allows for simultaneous event modeling and coreference discrimination within a single template, with a fully shared context.
arXiv Detail & Related papers (2023-10-23T02:47:27Z)
- Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance [12.239459451494872]
Document-level event argument extraction poses new challenges of long input and cross-sentence inference.
We propose a Span-trigger-based Contextual Pooling and latent Role Guidance model.
arXiv Detail & Related papers (2023-10-08T11:29:10Z)
- Retrieval-Augmented Generative Question Answering for Event Argument Extraction [66.24622127143044]
We propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction.
It retrieves the most similar QA pair and augments it as prompt to the current example's context, then decodes the arguments as answers.
Our approach substantially outperforms prior methods across various settings.
arXiv Detail & Related papers (2022-11-14T02:00:32Z)
- PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
arXiv Detail & Related papers (2022-02-15T18:01:39Z)
- Query and Extract: Refining Event Extraction as Type-oriented Binary Decoding [51.57864297948228]
We propose a novel event extraction framework that takes event types and argument roles as natural language queries.
Our framework benefits from the attention mechanisms to better capture the semantic correlation between the event types or argument roles and the input text.
arXiv Detail & Related papers (2021-10-14T15:49:40Z)
- Generating Disentangled Arguments with Prompts: A Simple Event Extraction Framework that Works [9.36117752274482]
Event Extraction bridges the gap between text and event signals.
We introduce the prompt-based learning strategy to the domain of Event Extraction.
In terms of F1 score on Argument Extraction, our simple architecture is stronger than any other generative counterpart.
arXiv Detail & Related papers (2021-10-09T09:36:08Z)
- Document-Level Event Argument Extraction by Conditional Generation [75.73327502536938]
Event extraction has long been treated as a sentence-level task in the IE community.
We propose a document-level neural event argument extraction model by formulating the task as conditional generation following event templates.
We also compile a new document-level event extraction benchmark dataset WikiEvents.
arXiv Detail & Related papers (2021-04-13T03:36:38Z)
- Unsupervised Label-aware Event Trigger and Argument Classification [73.86358632937372]
We propose an unsupervised event extraction pipeline, which first identifies events with available tools (e.g., SRL) and then automatically maps them to pre-defined event types.
We leverage pre-trained language models to contextually represent pre-defined types for both event triggers and arguments.
We successfully map 83% of the triggers and 54% of the arguments to the correct types, almost doubling the performance of previous zero-shot approaches.
arXiv Detail & Related papers (2020-12-30T17:47:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.