POQue: Asking Participant-specific Outcome Questions for a Deeper
Understanding of Complex Events
- URL: http://arxiv.org/abs/2212.02629v1
- Date: Mon, 5 Dec 2022 22:23:27 GMT
- Title: POQue: Asking Participant-specific Outcome Questions for a Deeper
Understanding of Complex Events
- Authors: Sai Vallurupalli, Sayontan Ghosh, Katrin Erk, Niranjan
Balasubramanian, Francis Ferraro
- Abstract summary: We show that crowd workers are able to infer the collective impact of salient events that make up the situation.
By creating a multi-step interface, we collect a high-quality annotated dataset of 8K short newswire narratives and ROCStories.
Our dataset, POQue, enables the exploration and development of models that address multiple aspects of semantic understanding.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge about outcomes is critical for complex event understanding but is
hard to acquire. We show that by pre-identifying a participant in a complex
event, crowd workers are able to (1) infer the collective impact of salient
events that make up the situation, (2) annotate the volitional engagement of
participants in causing the situation, and (3) ground the outcome of the
situation in state changes of the participants. By creating a multi-step
interface and a careful quality control strategy, we collect a high-quality
annotated dataset of 8K short newswire narratives and ROCStories with high
inter-annotator agreement (0.74-0.96 weighted Fleiss Kappa). Our dataset, POQue
(Participant Outcome Questions), enables the exploration and development of
models that address multiple aspects of semantic understanding. Experimentally,
we show that current language models lag behind human performance in subtle
ways through our task formulations that target abstract and specific
comprehension of a complex event, its outcome, and a participant's influence
over the event culmination.
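The abstract reports inter-annotator agreement of 0.74-0.96 weighted Fleiss Kappa. As a point of reference, the sketch below computes the plain (unweighted) Fleiss' kappa from a ratings matrix; the paper itself uses a weighted variant, and the function and toy data here are illustrative, not taken from the paper.

```python
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters who assigned item i to category j.
    Assumes every item is rated by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Per-item agreement: fraction of rater pairs that agree on the item.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement from the marginal category proportions.
    n_cats = len(ratings[0])
    totals = [sum(row[j] for row in ratings) for j in range(n_cats)]
    grand = n_items * n_raters
    p_e = sum((t / grand) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 4 items, 3 raters, 2 categories.
toy = [[3, 0], [0, 3], [3, 0], [2, 1]]
print(round(fleiss_kappa(toy), 3))  # -> 0.625
```

Values near 1 indicate near-perfect agreement, so the 0.74-0.96 range reported for POQue reflects substantial to almost-perfect annotator consensus.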
Related papers
- Grounding Partially-Defined Events in Multimodal Data [61.0063273919745]
We introduce a multimodal formulation for partially-defined events and cast the extraction of these events as a three-stage span retrieval task.
We propose a benchmark for this task, MultiVENT-G, that consists of 14.5 hours of densely annotated current event videos and 1,168 text documents, containing 22.8K labeled event-centric entities.
Results illustrate the challenges that abstract event understanding poses and demonstrate the promise of event-centric video-language systems.
arXiv Detail & Related papers (2024-10-07T17:59:48Z)
- SAGA: A Participant-specific Examination of Story Alternatives and Goal Applicability for a Deeper Understanding of Complex Events [13.894639630989563]
We argue that such knowledge can be elicited through a participant achievement lens.
We analyze a complex event in a narrative according to the intended achievements of the participants.
We show that smaller models fine-tuned on our dataset can achieve performance surpassing larger models.
arXiv Detail & Related papers (2024-08-11T14:52:40Z)
- Event prediction and causality inference despite incomplete information [0.41232474244672235]
We explored the challenge of predicting and explaining the occurrence of events within sequences of data points.
Our focus was particularly on scenarios in which unknown triggers causing the occurrence of events may consist of non-consecutive, masked, noisy data points.
We combined analytical, simulation, and machine learning approaches to investigate, quantify, and provide solutions.
arXiv Detail & Related papers (2024-06-09T19:23:20Z)
- EVIT: Event-Oriented Instruction Tuning for Event Reasoning [18.012724531672813]
Event reasoning aims to infer events according to certain relations and predict future events.
Large language models (LLMs) have made significant advancements in event reasoning owing to their wealth of knowledge and reasoning capabilities.
However, the smaller instruction-tuned models in current use do not consistently handle these tasks well.
arXiv Detail & Related papers (2024-04-18T08:14:53Z)
- Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs [61.796960984541464]
We present COM2 (COMplex COMmonsense), a new dataset created by sampling logical queries.
We verbalize them using handcrafted rules and large language models into multiple-choice and text generation questions.
Experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability.
arXiv Detail & Related papers (2024-03-12T08:13:52Z)
- Enhancing HOI Detection with Contextual Cues from Large Vision-Language Models [56.257840490146]
ConCue is a novel approach for improving visual feature extraction in HOI detection.
We develop a transformer-based feature extraction module with a multi-tower architecture that integrates contextual cues into both instance and interaction detectors.
arXiv Detail & Related papers (2023-11-26T09:11:32Z)
- Modeling Complex Event Scenarios via Simple Entity-focused Questions [58.16787028844743]
We propose a question-guided generation framework that models events in complex scenarios as answers to questions about participants.
At any step in the generation process, the framework uses the previously generated events as context, but generates the next event as an answer to one of three questions.
Our empirical evaluation shows that this question-guided generation provides better coverage of participants, diverse events within a domain, and comparable perplexities for modeling event sequences.
arXiv Detail & Related papers (2023-02-14T15:48:56Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
- "What Are You Trying to Do?" Semantic Typing of Event Processes [94.3499255880101]
This paper studies a new cognitively motivated semantic typing task, multi-axis event process typing.
We develop a large dataset containing over 60k event processes, featuring ultra fine-grained typing on both the action and object type axes.
We propose a hybrid learning framework, P2GT, which addresses the challenging typing problem with indirect supervision from glosses and a joint learning-to-rank framework.
arXiv Detail & Related papers (2020-10-13T22:37:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.