What happens before and after: Multi-Event Commonsense in Event Coreference Resolution
- URL: http://arxiv.org/abs/2302.09715v2
- Date: Tue, 21 Feb 2023 22:44:34 GMT
- Title: What happens before and after: Multi-Event Commonsense in Event Coreference Resolution
- Authors: Sahithya Ravi, Chris Tanner, Raymond Ng, Vered Shwartz
- Abstract summary: We propose a model that extends event mentions with temporal commonsense inferences.
We show that incorporating such inferences into an existing event coreference model improves its performance.
- Score: 9.57377689986741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event coreference models cluster event mentions pertaining to the same
real-world event. Recent models rely on contextualized representations to
recognize coreference among lexically or contextually similar mentions.
However, models typically fail to leverage commonsense inferences, which is
particularly limiting for resolving lexically-divergent mentions. We propose a
model that extends event mentions with temporal commonsense inferences. Given a
complex sentence with multiple events, e.g., "The man killed his wife and got
arrested", with the target event "arrested", our model generates plausible
events that happen before the target event - such as "the police arrived", and
after it, such as "he was sentenced". We show that incorporating such
inferences into an existing event coreference model improves its performance,
and we analyze the coreferences in which such temporal knowledge is required.
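The abstract's idea can be sketched in a few lines: generate plausible "before" and "after" events for a target mention, append them to its context, and feed the expanded text to the coreference encoder. This is a minimal illustration, not the paper's actual implementation: the canned inference generator below stands in for a commonsense model (the paper's generator is a trained model), and the `[BEFORE]`/`[AFTER]` markers and function names are illustrative assumptions.

```python
def generate_inferences(sentence: str, trigger: str) -> dict:
    """Hypothetical stand-in for a temporal-commonsense generator.

    A real system would condition a trained model on the sentence and
    the target trigger; here we return canned inferences matching the
    abstract's running example.
    """
    canned = {
        "arrested": {
            "before": ["the police arrived"],
            "after": ["he was sentenced"],
        }
    }
    return canned.get(trigger, {"before": [], "after": []})


def expand_mention(sentence: str, trigger: str) -> str:
    """Append before/after inferences to the mention's context so a
    downstream encoder can compare lexically-divergent mentions."""
    inf = generate_inferences(sentence, trigger)
    before = "; ".join(inf["before"])
    after = "; ".join(inf["after"])
    return f"{sentence} [BEFORE] {before} [AFTER] {after}"


expanded = expand_mention("The man killed his wife and got arrested", "arrested")
print(expanded)
```

Two mentions such as "got arrested" and "was taken into custody" would then share expanded contexts mentioning police and sentencing, which is what lets the coreference model link them despite the lexical divergence.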
Related papers
- Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
  Existing approaches to zero-shot event detection usually train models on datasets annotated with known event types.
  We aim to improve zero-shot event detection by training models to better follow event definitions.
  arXiv Detail & Related papers (2024-03-05T01:46:50Z)
- Continual Event Extraction with Semantic Confusion Rectification [50.59450741139265]
  We study continual event extraction, which aims to extract continually emerging event information while avoiding forgetting.
  We observe that semantic confusion about event types stems from annotations of the same text being updated over time.
  This paper proposes a novel continual event extraction model with semantic confusion rectification.
  arXiv Detail & Related papers (2023-10-24T02:48:50Z)
- A Generative Approach for Script Event Prediction via Contrastive Fine-tuning [35.87615178251874]
  Script event prediction aims to predict the subsequent event given the context.
  Recent works have attempted to improve event correlation reasoning by using pretrained language models and incorporating external knowledge.
  We propose a novel generative approach for this task, in which a pretrained language model is fine-tuned with an event-centric pretraining objective.
  arXiv Detail & Related papers (2022-12-07T07:32:47Z)
- Efficient Zero-shot Event Extraction with Context-Definition Alignment [50.15061819297237]
  Event extraction (EE) is the task of identifying event mentions of interest in text.
  We argue that using the static embedding of the event type name may not be enough, because a single word can be ambiguous.
  We name our approach Zero-shot Event extraction with Definition (ZED).
  arXiv Detail & Related papers (2022-11-09T19:06:22Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
  We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
  Our model outperforms state-of-the-art methods and can be further boosted by pre-training on extra large-scale video-text data.
  arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- EA$^2$E: Improving Consistency with Event Awareness for Document-Level Argument Extraction [52.43978926985928]
  We introduce the Event-Aware Argument Extraction (EA$^2$E) model with augmented context for training and inference.
  Experimental results on the WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA$^2$E.
  arXiv Detail & Related papers (2022-05-30T04:33:51Z)
- Event Extraction as Natural Language Generation [42.081626647997616]
  Event extraction is usually formulated as a classification or structured prediction problem.
  We propose GenEE, a model that not only captures complex dependencies within an event but also generalizes well to unseen or rare event types.
  Empirical results show that our model achieves strong performance on event extraction tasks in zero-shot, few-shot, and high-resource scenarios.
  arXiv Detail & Related papers (2021-08-29T00:27:31Z)
- Conditional Generation of Temporally-ordered Event Sequences [29.44608199294757]
  We present a conditional generation model that captures both event co-occurrence and the temporality of event sequences.
  This single model can address both temporal ordering (sorting a given sequence of events into the order in which they occurred) and event infilling (predicting new events that fit into a temporally-ordered sequence of existing ones).
  arXiv Detail & Related papers (2020-12-31T18:10:18Z)
- Unsupervised Label-aware Event Trigger and Argument Classification [73.86358632937372]
  We propose an unsupervised event extraction pipeline, which first identifies events with available tools (e.g., SRL) and then automatically maps them to pre-defined event types.
  We leverage pre-trained language models to contextually represent pre-defined types for both event triggers and arguments.
  We successfully map 83% of the triggers and 54% of the arguments to the correct types, almost doubling the performance of previous zero-shot approaches.
  arXiv Detail & Related papers (2020-12-30T17:47:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.