Are All Steps Equally Important? Benchmarking Essentiality Detection of Events
- URL: http://arxiv.org/abs/2210.04074v3
- Date: Sat, 28 Oct 2023 06:37:48 GMT
- Title: Are All Steps Equally Important? Benchmarking Essentiality Detection of Events
- Authors: Haoyu Wang, Hongming Zhang, Yueguan Wang, Yuqian Deng, Muhao Chen, Dan Roth
- Abstract summary: This paper examines the extent to which current models comprehend the essentiality of step events in relation to a goal event.
We contribute a high-quality corpus of (goal, step) pairs gathered from the community guideline website WikiHow.
The high inter-annotator agreement demonstrates that humans possess a consistent understanding of event essentiality.
- Score: 92.92425231146433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural language expresses events with varying granularities, where
coarse-grained events (goals) can be broken down into finer-grained event
sequences (steps). A critical yet overlooked aspect of understanding event
processes is recognizing that not all step events hold equal importance toward
the completion of a goal. In this paper, we address this gap by examining the
extent to which current models comprehend the essentiality of step events in
relation to a goal event. Cognitive studies suggest that such capability
enables machines to emulate human commonsense reasoning about preconditions and
necessary efforts of everyday tasks. We contribute a high-quality corpus of
(goal, step) pairs gathered from the community guideline website WikiHow, with
steps manually annotated for their essentiality concerning the goal by experts.
The high inter-annotator agreement demonstrates that humans possess a
consistent understanding of event essentiality. However, after evaluating
multiple statistical and large-scale pre-trained language models, we find that
existing approaches considerably underperform compared to humans. This
observation highlights the need for further exploration into this critical and
challenging task. The dataset and code are available at
http://cogcomp.org/page/publication_view/1023.
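The task described in the abstract can be framed as binary classification over (goal, step) pairs, where a system must decide whether a step is essential to its goal and is scored against expert labels. A minimal sketch of that framing follows; the field names, toy examples, and majority-class baseline are illustrative assumptions, not the paper's actual dataset schema or method.

```python
# Illustrative sketch of the (goal, step) essentiality-detection task.
# Field names and the toy baseline are assumptions for exposition only.
from dataclasses import dataclass

@dataclass
class GoalStepPair:
    goal: str        # coarse-grained event, e.g. a wikiHow article title
    step: str        # finer-grained step event drawn from the article
    essential: bool  # expert annotation: is this step required for the goal?

# Tiny hypothetical corpus in the (goal, step) format the abstract describes.
corpus = [
    GoalStepPair("Bake a cake", "Preheat the oven", True),
    GoalStepPair("Bake a cake", "Decorate with sprinkles", False),
]

def accuracy(predict, pairs):
    """Score a predictor against the expert essentiality labels."""
    correct = sum(predict(p.goal, p.step) == p.essential for p in pairs)
    return correct / len(pairs)

# Majority-class baseline: predict that every step is essential.
majority = lambda goal, step: True
print(accuracy(majority, corpus))  # 0.5 on this toy corpus
```

In the paper's setting, `predict` would be a statistical or pre-trained language model rather than this trivial baseline; the point of the benchmark is that such models still fall well short of human agreement on these labels.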
Related papers
- SAGA: A Participant-specific Examination of Story Alternatives and Goal Applicability for a Deeper Understanding of Complex Events [13.894639630989563]
We argue that such knowledge can be elicited through a participant achievement lens.
We analyze a complex event in a narrative according to the intended achievements of the participants.
We show that smaller models fine-tuned on our dataset can achieve performance surpassing larger models.
arXiv Detail & Related papers (2024-08-11T14:52:40Z)
- Double Mixture: Towards Continual Event Detection from Speech [60.33088725100812]
Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events.
This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events.
We propose a novel method, 'Double Mixture,' which merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting.
arXiv Detail & Related papers (2024-04-20T06:32:00Z)
- Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
Existing approaches on zero-shot event detection usually train models on datasets annotated with known event types.
We aim to improve zero-shot event detection by training models to better follow event definitions.
arXiv Detail & Related papers (2024-03-05T01:46:50Z)
- An Ordinal Latent Variable Model of Conflict Intensity [59.49424978353101]
The Goldstein scale is a widely-used expert-based measure that scores events on a conflictual-cooperative scale.
This paper takes a latent variable-based approach to measuring conflict intensity.
arXiv Detail & Related papers (2022-10-08T08:59:17Z)
- ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning [49.795767003586235]
We introduce ESTER, a comprehensive machine reading comprehension dataset for Event Semantic Relation Reasoning.
We study five most commonly used event semantic relations and formulate them as question answering tasks.
Experimental results show that the current SOTA systems achieve 60.5%, 57.8%, and 76.3% for event-based F1, token-based F1, and HIT@1 scores, respectively.
arXiv Detail & Related papers (2021-04-16T19:59:26Z)
- Visual Goal-Step Inference using wikiHow [29.901908251322684]
Inferring the sequence of steps needed to complete a goal can help artificial intelligence systems reason about human activities.
We propose the Visual Goal-Step Inference (VGSI) task where a model is given a textual goal and must choose a plausible step towards that goal from among four candidate images.
We show that the knowledge learned from our data can effectively transfer to other datasets like HowTo100M, increasing the multiple-choice accuracy by 15% to 20%.
arXiv Detail & Related papers (2021-04-12T22:20:09Z)
- Human in Events: A Large-Scale Benchmark for Human-centric Video Analysis in Complex Events [106.19047816743988]
We present a new large-scale dataset with comprehensive annotations, named Human-in-Events or HiEve.
It contains a record number of poses (>1M), the largest number of action instances (>56k) under complex events, and one of the largest collections of long-duration trajectories.
Based on its diverse annotation, we present two simple baselines for action recognition and pose estimation.
arXiv Detail & Related papers (2020-05-09T18:24:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.