Continual Few-shot Event Detection via Hierarchical Augmentation Networks
- URL: http://arxiv.org/abs/2403.17733v1
- Date: Tue, 26 Mar 2024 14:20:42 GMT
- Title: Continual Few-shot Event Detection via Hierarchical Augmentation Networks
- Authors: Chenlong Zhang, Pengfei Cao, Yubo Chen, Kang Liu, Zhiqiang Zhang, Mengshu Sun, Jun Zhao,
- Abstract summary: We introduce continual few-shot event detection (CFED), a scenario more commonly encountered in practice, in which a substantial number of labeled samples is not accessible.
The CFED task is challenging as it involves memorizing previous event types and learning new event types with few-shot samples.
Our method significantly outperforms all of these methods in multiple continual few-shot event detection tasks.
- Score: 21.574099641753055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional continual event detection relies on abundant labeled data for training, which is often impractical to obtain in real-world applications. In this paper, we introduce continual few-shot event detection (CFED), a scenario more commonly encountered in practice, in which a substantial number of labeled samples is not accessible. The CFED task is challenging as it involves memorizing previous event types and learning new event types with few-shot samples. To mitigate these challenges, we propose a memory-based framework: Hierarchical Augmentation Networks (HANet). To memorize previous event types with limited memory, we incorporate prototypical augmentation into the memory set. For the issue of learning new event types in few-shot scenarios, we propose a contrastive augmentation module for token representations. In addition to comparing with previous state-of-the-art methods, we also conduct comparisons with ChatGPT. Experiment results demonstrate that our method significantly outperforms all of these methods in multiple continual few-shot event detection tasks.
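The abstract does not include code, but the prototypical-augmentation idea (storing a compact class prototype plus perturbed copies instead of many raw exemplars) can be sketched as follows. The function name, the Gaussian noise model, and all shapes are illustrative assumptions, not HANet's actual implementation:

```python
import numpy as np

def build_augmented_memory(embeddings, n_aug=5, noise_scale=0.1, seed=0):
    """Sketch of prototypical augmentation for a memory set.

    embeddings: (n_samples, dim) array of embeddings for one event type.
    Returns a (1 + n_aug, dim) compact memory entry: the class prototype
    plus n_aug jittered copies approximating the class region.
    """
    rng = np.random.default_rng(seed)
    prototype = embeddings.mean(axis=0)  # class centroid
    # Gaussian jitter around the prototype stands in for stored exemplars
    jitter = rng.normal(0.0, noise_scale, size=(n_aug, embeddings.shape[1]))
    return np.vstack([prototype, prototype + jitter])

# toy usage: 4 few-shot examples in an 8-dimensional embedding space
mem = build_augmented_memory(np.random.default_rng(1).normal(size=(4, 8)))
print(mem.shape)  # (6, 8)
```

The appeal of this kind of scheme is memory efficiency: one prototype per event type (plus cheap synthetic perturbations) replaces a growing buffer of raw examples.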
Related papers
- Double Mixture: Towards Continual Event Detection from Speech [60.33088725100812]
Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events.
This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events.
We propose a novel method, 'Double Mixture,' which merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting.
arXiv Detail & Related papers (2024-04-20T06:32:00Z) - Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
Existing approaches on zero-shot event detection usually train models on datasets annotated with known event types.
We aim to improve zero-shot event detection by training models to better follow event definitions.
arXiv Detail & Related papers (2024-03-05T01:46:50Z) - Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning [45.3385722995475]
We propose MetaEvent, a meta learning-based framework for zero- and few-shot event detection.
In our framework, we propose to use the cloze-based prompt and a trigger-aware soft verbalizer to efficiently project output to unseen event types.
As such, the proposed MetaEvent can perform zero-shot event detection by mapping features to event types without any prior knowledge.
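Mapping features to unseen event types, as described above, is commonly done by comparing a feature vector against label embeddings. The following is a generic nearest-neighbour sketch of that idea, not MetaEvent's actual projection; all names and values are illustrative:

```python
import numpy as np

def zero_shot_classify(token_feat, type_embeds, type_names):
    """Assign a feature vector to the event type whose label embedding
    is most cosine-similar. A generic sketch of feature-to-type mapping."""
    t = token_feat / np.linalg.norm(token_feat)
    T = type_embeds / np.linalg.norm(type_embeds, axis=1, keepdims=True)
    return type_names[int(np.argmax(T @ t))]

types = ["Attack", "Transport", "Meet"]
embeds = np.eye(3)  # toy label embeddings, one axis per type
print(zero_shot_classify(np.array([0.1, 0.9, 0.2]), embeds, types))  # Transport
```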
arXiv Detail & Related papers (2023-05-27T05:36:46Z) - Few-shot Incremental Event Detection [3.508346077709686]
Event detection tasks can enable the quick detection of events from texts.
Extending them to detect a new class without losing the ability to detect old classes requires costly retraining of the model from scratch.
We define a new task, few-shot incremental event detection, which focuses on learning to detect a new event class with limited data.
arXiv Detail & Related papers (2022-09-05T14:21:26Z) - Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z) - Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection [41.74511506187945]
Lifelong event detection aims to incrementally update a model with new event types and data.
One critical challenge is that the model would catastrophically forget old types when continually trained on new data.
We introduce Episodic Memory Prompts (EMP) to explicitly preserve the learned task-specific knowledge.
arXiv Detail & Related papers (2022-04-15T00:21:31Z) - PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
arXiv Detail & Related papers (2022-02-15T18:01:39Z) - Extensively Matching for Few-shot Learning Event Detection [66.31312496170139]
Event detection models under supervised learning settings fail to transfer to new event types.
Few-shot learning has not been explored in event detection.
We propose two novel loss factors that match examples in the support set to provide more training signals to the model.
arXiv Detail & Related papers (2020-06-17T18:30:30Z)
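Support-set matching of the kind described in the last entry is typically built on class prototypes. The sketch below shows the general prototypical-network style of matching, not that paper's specific loss factors; all names and values are illustrative:

```python
import numpy as np

def prototype_logits(query, support, support_labels, n_classes):
    """Score a query embedding against class prototypes computed from the
    support set; softmax over negative Euclidean distances gives class
    probabilities. A generic sketch of support-set matching."""
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    d = -np.linalg.norm(protos - query, axis=1)  # negative distance as logit
    e = np.exp(d - d.max())
    return e / e.sum()                           # softmax over classes

# toy support set: two examples for each of two classes in 2-D
support = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labels = np.array([0, 0, 1, 1])
probs = prototype_logits(np.array([0.1, 0.05]), support, labels, 2)
print(int(np.argmax(probs)))  # 0
```

Matching against prototypes rather than individual examples is what makes the signal usable with only a handful of support samples per class.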
This list is automatically generated from the titles and abstracts of the papers in this site.