ESTER: A Machine Reading Comprehension Dataset for Event Semantic
Relation Reasoning
- URL: http://arxiv.org/abs/2104.08350v1
- Date: Fri, 16 Apr 2021 19:59:26 GMT
- Title: ESTER: A Machine Reading Comprehension Dataset for Event Semantic
Relation Reasoning
- Authors: Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth,
Nanyun Peng
- Abstract summary: We introduce ESTER, a comprehensive machine reading comprehension dataset for Event Semantic Relation Reasoning.
We study the five most commonly used event semantic relations and formulate them as question answering tasks.
Experimental results show that current SOTA systems achieve 60.5%, 57.8%, and 76.3% for event-based F1, token-based F1, and HIT@1 scores respectively.
- Score: 49.795767003586235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stories and narratives are composed based on a variety of events.
Understanding how these events are semantically related to each other is the
essence of reading comprehension. Recent event-centric reading comprehension
datasets focus on either event arguments or event temporal commonsense.
Although these tasks evaluate machines' narrative understanding abilities,
human-like reading comprehension requires the capability to process event-based
semantics beyond arguments and temporal commonsense. For example, to understand
causality between events, we need to infer motivations or purposes; to
understand event hierarchy, we need to parse the composition of events. To
facilitate these tasks, we introduce ESTER, a comprehensive machine reading
comprehension (MRC) dataset for Event Semantic Relation Reasoning. We study
the five most commonly used event semantic relations and formulate them as
question answering tasks. Experimental results show that the current SOTA
systems achieve 60.5%, 57.8%, and 76.3% for event-based F1, token-based F1, and
HIT@1 scores respectively, which are significantly below human performance.
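The token-based F1 reported above measures bag-of-tokens overlap between a predicted answer span and a gold span, in the style of SQuAD scoring. Below is a minimal illustrative sketch of that metric; it is not the official ESTER evaluation script, and the paper's exact tokenization and aggregation over multiple gold answers may differ.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer span (whitespace tokens)."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # If either side is empty, score 1.0 only when both are empty.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    # Multiset intersection counts shared tokens, respecting repetitions.
    overlap = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(overlap.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# 3 of 4 predicted tokens match 3 of 4 gold tokens: P = R = 0.75, F1 = 0.75.
print(token_f1("the storm caused flooding", "storm caused severe flooding"))
```

Event-based F1, by contrast, credits a prediction only for the gold event triggers it covers, so it is stricter than raw token overlap; HIT@1 simply checks whether the top-ranked answer contains at least one correct event.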
Related papers
- EVIT: Event-Oriented Instruction Tuning for Event Reasoning [18.012724531672813]
Event reasoning aims to infer events according to certain relations and predict future events.
Large language models (LLMs) have made significant advancements in event reasoning owing to their wealth of knowledge and reasoning capabilities.
However, the smaller instruction-tuned models currently in use do not consistently perform well on these tasks.
arXiv Detail & Related papers (2024-04-18T08:14:53Z)
- Towards Event Extraction from Speech with Contextual Clues [61.164413398231254]
We introduce the Speech Event Extraction (SpeechEE) task and construct three synthetic training sets and one human-spoken test set.
Compared to event extraction from text, SpeechEE poses greater challenges mainly due to complex speech signals that are continuous and have no word boundaries.
Our method brings significant improvements on all datasets, achieving a maximum F1 gain of 10.7%.
arXiv Detail & Related papers (2024-01-27T11:07:19Z)
- MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation [104.6065882758648]
MAVEN-Arg is the first all-in-one dataset supporting event detection, event argument extraction, and event relation extraction.
As an EAE benchmark, MAVEN-Arg offers three main advantages: (1) a comprehensive schema covering 162 event types and 612 argument roles, all with expert-written definitions and examples; (2) a large data scale, containing 98,591 events and 290,613 arguments obtained with laborious human annotation; and (3) the exhaustive annotation supporting all task variants of EAE.
arXiv Detail & Related papers (2023-11-15T16:52:14Z)
- COMET-M: Reasoning about Multiple Events in Complex Sentences [14.644677930985816]
We propose COMET-M (Multi-Event), an event-centric commonsense model capable of generating commonsense inferences for a target event within a complex sentence.
COMET-M builds upon COMET, which excels at generating event-centric inferences for simple sentences, but struggles with the complexity of multi-event sentences prevalent in natural text.
arXiv Detail & Related papers (2023-05-24T01:35:01Z)
- Rich Event Modeling for Script Event Prediction [60.67635412135682]
We propose the Rich Event Prediction (REP) framework for script event prediction.
REP contains an event extractor to extract such information from texts.
The core component of the predictor is a transformer-based event encoder to flexibly deal with an arbitrary number of arguments.
arXiv Detail & Related papers (2022-12-16T05:17:59Z)
- Are All Steps Equally Important? Benchmarking Essentiality Detection of Events [92.92425231146433]
This paper examines the extent to which current models comprehend the essentiality of step events in relation to a goal event.
We contribute a high-quality corpus of (goal, step) pairs gathered from the community guideline website WikiHow.
The high inter-annotator agreement demonstrates that humans possess a consistent understanding of event essentiality.
arXiv Detail & Related papers (2022-10-08T18:00:22Z)
- CLIP-Event: Connecting Text and Images with Event Structures [123.31452120399827]
We propose a contrastive learning framework that enforces event understanding in vision-language pretraining models.
We take advantage of text information extraction technologies to obtain event structural knowledge.
Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction.
arXiv Detail & Related papers (2022-01-13T17:03:57Z)
- Document-level Event Extraction with Efficient End-to-end Learning of Cross-event Dependencies [37.96254956540803]
We propose an end-to-end model leveraging Deep Value Networks (DVN), a structured prediction algorithm, to efficiently capture cross-event dependencies for document-level event extraction.
Our approach achieves comparable performance to CRF-based models on ACE05, while enjoying significantly higher computational efficiency.
arXiv Detail & Related papers (2020-10-24T05:28:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.