Lost in Recursion: Mining Rich Event Semantics in Knowledge Graphs
- URL: http://arxiv.org/abs/2404.16405v1
- Date: Thu, 25 Apr 2024 08:33:08 GMT
- Title: Lost in Recursion: Mining Rich Event Semantics in Knowledge Graphs
- Authors: Florian Plötzky, Niklas Kiehne, Wolf-Tilo Balke
- Abstract summary: We show how narratives concerning complex events can be constructed and utilized.
We provide an algorithm that mines such narratives from texts to account for different perspectives on complex events.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Our world is shaped by events of various complexity. This includes both small-scale local events like farmers' markets and large complex events like political and military conflicts. The latter are typically not observed directly but through the lenses of intermediaries like newspapers or social media. In other words, we do not witness the unfolding of such events directly but are confronted with narratives surrounding them. Such narratives capture different aspects of a complex event and may also differ with respect to the narrator. Thus, they provide rich semantics concerning real-world events. In this paper, we show how narratives concerning complex events can be constructed and utilized. We provide a formal representation of narratives based on recursive nodes to represent multiple levels of detail and discuss how narratives can be bound to event-centric knowledge graphs. Additionally, we provide an algorithm based on incremental prompting techniques that mines such narratives from texts to account for different perspectives on complex events. Finally, we demonstrate the approach in a proof of concept and outline future research directions.
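The abstract's recursive-node representation and its binding to an event-centric knowledge graph can be pictured with a short sketch. The Python snippet below is a minimal illustration only, not the paper's formalism; the names NarrativeNode, event_iri, and narrator are assumptions, and the incremental-prompting mining step is deliberately left out.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch (not the paper's formalism): a narrative node either
# binds to an event in an event-centric knowledge graph via an IRI or
# recursively expands into finer-grained sub-narratives.

@dataclass
class NarrativeNode:
    label: str                          # short description, e.g. "ceasefire negotiations"
    event_iri: Optional[str] = None     # hypothetical binding to a KG event node
    narrator: Optional[str] = None      # source / perspective attribution
    children: List["NarrativeNode"] = field(default_factory=list)

    def expand(self, child: "NarrativeNode") -> None:
        """Attach a finer level of detail to this node."""
        self.children.append(child)

    def bound_events(self) -> List[str]:
        """Collect all KG event IRIs reachable from this node (depth-first)."""
        iris = [self.event_iri] if self.event_iri else []
        for child in self.children:
            iris.extend(child.bound_events())
        return iris


if __name__ == "__main__":
    conflict = NarrativeNode("political conflict X", narrator="newspaper A")
    talks = NarrativeNode("peace talks", event_iri="ex:Event_PeaceTalks")
    conflict.expand(talks)
    talks.expand(NarrativeNode("first negotiation round", event_iri="ex:Event_Round1"))
    print(conflict.bound_events())  # ['ex:Event_PeaceTalks', 'ex:Event_Round1']
```

In this reading, each node either points directly to a knowledge-graph event or unfolds into sub-narratives, which is how multiple levels of detail and narrator-specific perspectives could coexist in one structure; the mining algorithm described in the abstract would populate such nodes from text.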
Related papers
- Double Mixture: Towards Continual Event Detection from Speech [60.33088725100812]
Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events.
This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events.
We propose a novel method, 'Double Mixture,' which merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting.
arXiv Detail & Related papers (2024-04-20T06:32:00Z) - What's New? Identifying the Unfolding of New Events in Narratives [11.058053956455545]
We study the Information Status (IS) of the events and propose a novel challenging task: the automatic identification of new events in a narrative.
We define an event as a triplet of subject, predicate, and object; an event is categorized as new with respect to the preceding discourse context (see the sketch after this summary).
We annotate a publicly available corpus of narratives with new events at the sentence level using human annotators.
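As a rough illustration of the triplet-based event definition and the new-vs-known distinction above, here is a minimal Python sketch; the names Event and is_new are hypothetical, and the actual task involves discourse phenomena (e.g., coreference) that a simple set lookup does not capture.

```python
from typing import NamedTuple, Set

# Illustrative only: an event as a (subject, predicate, object) triplet,
# labelled "new" when the preceding discourse has not mentioned it.

class Event(NamedTuple):
    subject: str
    predicate: str
    obj: str

def is_new(event: Event, discourse_context: Set[Event]) -> bool:
    """An event counts as new if it does not already appear in the discourse context."""
    return event not in discourse_context

context: Set[Event] = {Event("army", "entered", "the city")}
print(is_new(Event("mayor", "declared", "a curfew"), context))  # True
print(is_new(Event("army", "entered", "the city"), context))    # False
```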
arXiv Detail & Related papers (2023-02-15T15:54:01Z) - Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach in which large language models are utilized to generate source documents that allow predicting, given a high-level event definition, the specific events, arguments, and relations between them.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z) - Beyond Grounding: Extracting Fine-Grained Event Hierarchies Across Modalities [43.048896440009784]
We propose the task of extracting event hierarchies from multimodal (video and text) data.
This reveals the structure of events and is critical to understanding them.
We show the limitations of state-of-the-art unimodal and multimodal baselines on this task.
arXiv Detail & Related papers (2022-06-14T23:24:15Z) - It's the Same Old Story! Enriching Event-Centric Knowledge Graphs by Narrative Aspects [0.3655021726150368]
We introduce a novel and lightweight structure for event-centric knowledge graphs, which for the first time allows for queries incorporating viewpoint-dependent and narrative aspects.
Our experiments demonstrate the effective incorporation of subjective attributions for event participants and show the benefits of specifically tailored indexes for narrative query processing.
arXiv Detail & Related papers (2022-05-08T14:00:41Z) - Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
arXiv Detail & Related papers (2022-01-07T20:10:47Z) - ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning [49.795767003586235]
We introduce ESTER, a comprehensive machine reading comprehension dataset for Event Semantic Relation Reasoning.
We study the five most commonly used event semantic relations and formulate them as question answering tasks.
Experimental results show that current SOTA systems achieve 60.5%, 57.8%, and 76.3% for event-based F1, token-based F1, and HIT@1 scores, respectively (a generic sketch of these metrics follows).
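For readers unfamiliar with the reported metrics, the following Python sketch shows generic versions of token-level F1 and HIT@1; ESTER's exact definitions may differ, so treat this as illustrative only.

```python
from collections import Counter
from typing import List

def token_f1(prediction: str, gold: str) -> float:
    """Bag-of-tokens F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def hit_at_1(top_prediction: str, gold_answers: List[str]) -> bool:
    """Does the highest-ranked prediction exactly match any gold answer?"""
    return top_prediction.lower() in {g.lower() for g in gold_answers}

print(round(token_f1("the troops withdrew", "troops withdrew from the city"), 2))  # 0.75
print(hit_at_1("the troops withdrew", ["the troops withdrew", "a withdrawal"]))    # True
```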
arXiv Detail & Related papers (2021-04-16T19:59:26Z) - Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z) - Screenplay Summarization Using Latent Narrative Structure [78.45316339164133]
We propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.
We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays.
Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode.
arXiv Detail & Related papers (2020-04-27T11:54:19Z)