Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning
- URL: http://arxiv.org/abs/2305.17373v1
- Date: Sat, 27 May 2023 05:36:46 GMT
- Title: Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning
- Authors: Zhenrui Yue, Huimin Zeng, Mengfei Lan, Heng Ji, Dong Wang
- Abstract summary: We propose MetaEvent, a meta learning-based framework for zero- and few-shot event detection.
In our framework, we propose to use the cloze-based prompt and a trigger-aware soft verbalizer to efficiently project output to unseen event types.
As such, the proposed MetaEvent can perform zero-shot event detection by mapping features to event types without any prior knowledge.
- Score: 45.3385722995475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With emerging online topics as a source for numerous new events, detecting
unseen / rare event types presents an elusive challenge for existing event
detection methods, where only limited data access is provided for training. To
address the data scarcity problem in event detection, we propose MetaEvent, a
meta learning-based framework for zero- and few-shot event detection.
Specifically, we sample training tasks from existing event types and perform
meta training to search for optimal parameters that quickly adapt to unseen
tasks. In our framework, we propose to use the cloze-based prompt and a
trigger-aware soft verbalizer to efficiently project output to unseen event
types. Moreover, we design a contrastive meta objective based on maximum mean
discrepancy (MMD) to learn class-separating features. As such, the proposed
MetaEvent can perform zero-shot event detection by mapping features to event
types without any prior knowledge. In our experiments, we demonstrate the
effectiveness of MetaEvent in both zero-shot and few-shot scenarios, where the
proposed method achieves state-of-the-art performance in extensive experiments
on benchmark datasets FewEvent and MAVEN.
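The contrastive meta objective above is built on maximum mean discrepancy (MMD), which measures the distance between two feature distributions in a kernel space. The sketch below shows a standard biased MMD estimate with an RBF kernel; it is a minimal illustration of the quantity involved, not the authors' implementation, and the kernel choice and bandwidth are assumptions:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between all pairs of rows in x and y
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

# Toy usage: features of two event types; a class-separating objective
# would encourage a large MMD between them and a small MMD within a class.
a = np.zeros((4, 2))          # features of event type A (synthetic)
b = np.full((4, 2), 3.0)      # features of event type B (synthetic)
print(mmd2(a, a), mmd2(a, b))
```

With identical inputs the estimate is zero, and it grows as the two feature sets separate, which is the property a contrastive meta objective exploits.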
Related papers
- Grounding Partially-Defined Events in Multimodal Data [61.0063273919745]
We introduce a multimodal formulation for partially-defined events and cast the extraction of these events as a three-stage span retrieval task.
We propose a benchmark for this task, MultiVENT-G, that consists of 14.5 hours of densely annotated current event videos and 1,168 text documents, containing 22.8K labeled event-centric entities.
Results illustrate the challenges that abstract event understanding poses and demonstrate promise in event-centric video-language systems.
arXiv Detail & Related papers (2024-10-07T17:59:48Z)
- Continual Few-shot Event Detection via Hierarchical Augmentation Networks [21.574099641753055]
We introduce continual few-shot event detection (CFED), a more commonly encountered scenario in which a substantial number of labeled samples is not accessible.
The CFED task is challenging as it involves memorizing previous event types and learning new event types with few-shot samples.
Our method significantly outperforms baseline methods in multiple continual few-shot event detection tasks.
arXiv Detail & Related papers (2024-03-26T14:20:42Z)
- Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
Existing approaches on zero-shot event detection usually train models on datasets annotated with known event types.
We aim to improve zero-shot event detection by training models to better follow event definitions.
arXiv Detail & Related papers (2024-03-05T01:46:50Z)
- PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
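The cloze-prompt idea can be illustrated with a toy template that reframes event detection as a masked-word prediction task for a pretrained language model. The wording and trigger placement below are hypothetical, not the template used in the paper:

```python
def cloze_prompt(sentence, trigger):
    """Build a cloze-style prompt for event-type prediction.

    A masked language model would be asked to fill the [MASK] slot
    with a word indicating the event type (e.g. "arrest", "attack").
    The template text itself is an illustrative assumption.
    """
    return f'{sentence} Here, "{trigger}" is the trigger of a [MASK] event.'

# Toy usage: the prompt pairs the original sentence with a cloze question.
print(cloze_prompt("Police detained the suspect.", "detained"))
```

Feeding such prompts to a masked language model lets its pretrained knowledge score candidate event-type words for the [MASK] position, which is what makes the approach usable with few or no labeled examples.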
arXiv Detail & Related papers (2022-02-15T18:01:39Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
- Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event Detection [34.0901494858203]
Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.
We propose a knowledge-based few-shot event detection method which uses a definition-based encoder to introduce external event knowledge.
Experiments show our method consistently and substantially outperforms a number of baselines by at least 15 absolute F1 points.
arXiv Detail & Related papers (2021-05-20T04:26:26Z)
- Extensively Matching for Few-shot Learning Event Detection [66.31312496170139]
Event detection models under supervised learning settings fail to transfer to new event types.
Few-shot learning has not been explored in event detection.
We propose two novel loss factors that match examples in the support set to provide more training signals to the model.
arXiv Detail & Related papers (2020-06-17T18:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.