Salience-Aware Event Chain Modeling for Narrative Understanding
- URL: http://arxiv.org/abs/2109.10475v1
- Date: Wed, 22 Sep 2021 01:34:03 GMT
- Title: Salience-Aware Event Chain Modeling for Narrative Understanding
- Authors: Xiyang Zhang, Muhao Chen, Jonathan May
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Storytelling, whether via fables, news reports, documentaries, or memoirs,
can be thought of as the communication of interesting and related events that,
taken together, form a concrete process. It is desirable to extract the event
chains that represent such processes. However, this extraction remains a
challenging problem. We posit that this is due to the nature of the texts from
which chains are discovered. Natural language text interleaves a narrative of
concrete, salient events with background information, contextualization,
opinion, and other elements that are important for a variety of necessary
discourse and pragmatics acts but are not part of the principal chain of events
being communicated. We introduce methods for extracting this principal chain
from natural language text, by filtering away non-salient events and supportive
sentences. We demonstrate the effectiveness of our methods at isolating
critical event chains by comparing their effect on downstream tasks. We show
that by pre-training large language models on our extracted chains, we obtain
improvements in two tasks that benefit from a clear understanding of event
chains: narrative prediction and event-based temporal question answering. The
demonstrated improvements and ablation studies confirm that our extraction
method isolates critical event chains.
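The filtering idea can be illustrated with a toy sketch. This is not the authors' actual model: the `Event` fields and the salience heuristic below (keeping concrete, non-stative events whose predicates recur) are invented for illustration only.

```python
# Illustrative sketch of principal-chain extraction: keep concrete,
# salient events and drop background, contextualization, and opinion.
# The Event schema and heuristic here are assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Event:
    verb: str          # predicate of the event
    sentence: str      # source sentence
    is_stative: bool   # background/opinion vs. concrete action (assumed given)

def principal_chain(events, min_salience=1):
    """Keep events that are concrete (non-stative) and whose predicate
    occurs at least min_salience times -- a crude stand-in for salience."""
    counts = {}
    for e in events:
        counts[e.verb] = counts.get(e.verb, 0) + 1
    return [e for e in events
            if not e.is_stative and counts[e.verb] >= min_salience]

events = [
    Event("arrive", "The detective arrived at the scene.", False),
    Event("be", "It was a cold, grey morning.", True),                # background
    Event("examine", "She examined the footprints.", False),
    Event("believe", "Locals believed the house was cursed.", True),  # opinion
    Event("arrest", "By noon she arrested the suspect.", False),
]

chain = principal_chain(events)
print([e.verb for e in chain])  # prints ['arrive', 'examine', 'arrest']
```

In this sketch the stative background and opinion sentences are filtered out, leaving a chain of concrete narrative events; the paper's actual criteria for salience and supportiveness are learned rather than rule-based.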
Related papers
- Double Mixture: Towards Continual Event Detection from Speech [60.33088725100812]
Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events.
This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events.
We propose a novel method, 'Double Mixture,' which merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting.
arXiv Detail & Related papers (2024-04-20T06:32:00Z)
- Towards Event Extraction from Speech with Contextual Clues [61.164413398231254]
We introduce the Speech Event Extraction (SpeechEE) task and construct three synthetic training sets and one human-spoken test set.
Compared to event extraction from text, SpeechEE poses greater challenges mainly due to complex speech signals that are continuous and have no word boundaries.
Our method brings significant improvements on all datasets, achieving a maximum F1 gain of 10.7%.
arXiv Detail & Related papers (2024-01-27T11:07:19Z)
- NECE: Narrative Event Chain Extraction Toolkit [64.89332212585404]
We introduce NECE, an open-access, document-level toolkit that automatically extracts and aligns narrative events in the temporal order of their occurrence.
We show the high quality of the NECE toolkit and demonstrate its downstream application in analyzing narrative bias regarding gender.
We also openly discuss the shortcomings of the current approach, and potential of leveraging generative models in future works.
arXiv Detail & Related papers (2022-08-17T04:30:58Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- Unsupervised Key Event Detection from Massive Text Corpora [42.31889135421941]
We propose a new task, key event detection at the intermediate level, which aims to detect key events from a news corpus.
This task can bridge event understanding and structuring and is inherently challenging because of the thematic and temporal closeness of key events.
We develop an unsupervised key event detection framework, EvMine, that extracts temporally frequent peak phrases using a novel ttf-itf score.
arXiv Detail & Related papers (2022-06-08T20:31:02Z)
- Curriculum Learning for Goal-Oriented Semantic Communications with a Common Language [60.85719227557608]
A holistic goal-oriented semantic communication framework is proposed to enable a speaker and a listener to cooperatively execute a set of sequential tasks.
A common language based on a hierarchical belief set is proposed to enable semantic communications between speaker and listener.
An optimization problem is defined to determine the perfect and abstract description of the events.
arXiv Detail & Related papers (2022-04-21T22:36:06Z)
- Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations [70.35379323231241]
This paper presents an improved approach to event extraction that explicitly utilizes the relationships among event arguments.
We employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turn, iterative process.
Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods.
arXiv Detail & Related papers (2021-06-23T13:24:39Z)
- Event Argument Extraction using Causal Knowledge Structures [9.56216681584111]
Event argument extraction refers to the task of extracting structured information from unstructured text for a particular event of interest.
Most of the existing works model this task at a sentence level, restricting the context to a local scope.
We propose an external knowledge aided approach to infuse document-level event information to aid the extraction of complex event arguments.
arXiv Detail & Related papers (2021-05-02T13:59:07Z)
- Causal BERT : Language models for causality detection between events expressed in text [1.0756038762528868]
Causality understanding between events is helpful in many areas, including health care, business risk management and finance.
Detecting "Cause-Effect" relationships between natural language events remains challenging because such relationships are often expressed implicitly.
Our proposed methods achieve state-of-the-art performance on three different data distributions and can be leveraged to extract a causal diagram.
arXiv Detail & Related papers (2020-12-10T04:59:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.