Causal schema induction for knowledge discovery
- URL: http://arxiv.org/abs/2303.15381v1
- Date: Mon, 27 Mar 2023 16:55:49 GMT
- Title: Causal schema induction for knowledge discovery
- Authors: Michael Regan, Jena D. Hwang, Keisuke Sakaguchi, and James Pustejovsky
- Abstract summary: We present Torquestra, a dataset of text-graph-schema units integrating temporal, event, and causal structures.
We benchmark our dataset on three knowledge discovery tasks, building and evaluating models for each.
Results show that systems that harness causal structure are effective at identifying texts sharing similar causal meaning components.
- Score: 21.295680010103602
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Making sense of familiar yet new situations typically involves making
generalizations about causal schemas, stories that help humans reason about
event sequences. Reasoning about events includes identifying cause and effect
relations shared across event instances, a process we refer to as causal schema
induction. Statistical schema induction systems may leverage structural
knowledge encoded in discourse or the causal graphs associated with event
meaning; however, resources for studying such causal structure are few in number and
limited in size. In this work, we investigate how to apply schema induction
models to the task of knowledge discovery for enhanced search of
English-language news texts. To tackle the problem of data scarcity, we present
Torquestra, a manually curated dataset of text-graph-schema units integrating
temporal, event, and causal structures. We benchmark our dataset on three
knowledge discovery tasks, building and evaluating models for each. Results
show that systems that harness causal structure are effective at identifying
texts sharing similar causal meaning components rather than relying on lexical
cues alone. We make our dataset and models available for research purposes.
Related papers
- EventGround: Narrative Reasoning by Grounding to Eventuality-centric Knowledge Graphs [41.928535719157054]
We propose an initial comprehensive framework called EventGround to tackle the problem of grounding free-texts to eventuality-centric knowledge graphs.
We provide simple yet effective parsing and partial information extraction methods to tackle these problems.
Our framework, incorporating grounded knowledge, achieves state-of-the-art performance while providing interpretable evidence.
arXiv Detail & Related papers (2024-03-30T01:16:37Z)
- Enhancing Event Causality Identification with Rationale and Structure-Aware Causal Question Answering [30.000134835133522]
Document-level Event Causality Identification (DECI) aims to identify causal relations between two events in documents.
Recent research tends to use pre-trained language models to generate event causal relations.
We propose a multi-task learning framework to enhance event causality identification with rationale and structure-aware causal question answering.
arXiv Detail & Related papers (2024-03-17T07:41:58Z)
- A Unified Causal View of Instruction Tuning [76.1000380429553]
We develop a meta Structural Causal Model (meta-SCM) to integrate different NLP tasks under a single causal structure of the data.
The key idea is to learn task-required causal factors and use only those to make predictions for a given task.
arXiv Detail & Related papers (2024-02-09T07:12:56Z)
- Discovery of the Hidden World with Large Language Models [100.38157787218044]
We introduce COAT: Causal representatiOn AssistanT.
COAT incorporates LLMs as a factor proposer that extracts the potential causal factors from unstructured data.
LLMs can also be instructed to provide additional information used to collect data values.
arXiv Detail & Related papers (2024-02-06T12:18:54Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
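Mulan's preprocessing step converts log sequences into time-series data. A much simpler count-based stand-in for that idea is sketched below: bucket log events by fixed time window and count occurrences per log template. Mulan itself uses a log-tailored language model for representation learning, so this is only an illustration of the log-to-time-series framing, with invented template names.

```python
# Hedged sketch: turn a log sequence into per-template time series by
# counting occurrences in fixed windows. This is NOT Mulan's method
# (which learns representations with a log-tailored language model);
# it only illustrates the log-to-time-series conversion idea.

def logs_to_time_series(logs, window=60):
    """logs: list of (timestamp_seconds, template_id) pairs.
    Returns {template_id: [count in window 0, count in window 1, ...]}."""
    if not logs:
        return {}
    start = min(t for t, _ in logs)
    n_windows = (max(t for t, _ in logs) - start) // window + 1
    series = {}
    for t, template in logs:
        idx = (t - start) // window
        series.setdefault(template, [0] * n_windows)[idx] += 1
    return series

# Hypothetical log stream: "disk_error" fires twice in the first minute,
# once in the third; "restart" fires once in the second minute.
logs = [(0, "disk_error"), (30, "disk_error"), (70, "restart"), (130, "disk_error")]
series = logs_to_time_series(logs, window=60)
# → {"disk_error": [2, 0, 1], "restart": [0, 1, 0]}
```

Once logs are in time-series form, they can be aligned with metric time series for joint causal structure learning.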
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- GeoAI for Knowledge Graph Construction: Identifying Causality Between Cascading Events to Support Environmental Resilience Research [3.3072870202596736]
This paper introduces our GeoAI solutions to identify causality among events, in particular, disaster events.
Our solution enriches the event knowledge base and allows for the exploration of linked cascading events in large knowledge graphs.
arXiv Detail & Related papers (2022-11-11T05:31:03Z)
- Zero-Shot On-the-Fly Event Schema Induction [61.91468909200566]
We present a new approach in which large language models are utilized to generate source documents that allow predicting, given a high-level event definition, the specific events, arguments, and relations between them.
Using our model, complete schemas on any topic can be generated on-the-fly without any manual data collection, i.e., in a zero-shot manner.
arXiv Detail & Related papers (2022-10-12T14:37:00Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (for short, C-DAGs)
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
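A C-DAG groups variables into clusters and specifies causal edges only between clusters, so partial prior knowledge suffices; like any causal diagram, the cluster-level graph must be acyclic. The sketch below illustrates that structural constraint with an acyclicity check (Kahn's topological sort). The cluster names and edges are invented for illustration and are not from the paper.

```python
# Hedged sketch of a cluster causal diagram (C-DAG): variables are
# grouped into clusters, and causal edges are declared only between
# clusters. Cluster names and edges below are hypothetical examples.

clusters = {
    "Lifestyle": ["diet", "exercise"],
    "Biomarkers": ["blood_pressure", "cholesterol"],
    "Outcome": ["heart_disease"],
}
cluster_edges = [
    ("Lifestyle", "Biomarkers"),
    ("Biomarkers", "Outcome"),
    ("Lifestyle", "Outcome"),
]

def is_acyclic(nodes, edges):
    """Kahn's algorithm: a valid C-DAG is acyclic at the cluster level."""
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [n for n, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)  # all nodes ordered iff no cycle

assert is_acyclic(clusters, cluster_edges)
```

Valid causal inference over the C-DAG then proceeds at the cluster level, without requiring the full variable-level graph to be specified.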
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Causal BERT: Language models for causality detection between events expressed in text [1.0756038762528868]
Causality understanding between events is helpful in many areas, including health care, business risk management and finance.
Identifying "Cause-Effect" relationships between natural language events remains a challenge because they are often expressed implicitly.
Our proposed methods achieve state-of-the-art performance on three different data distributions and can be leveraged to extract a causal diagram.
arXiv Detail & Related papers (2020-12-10T04:59:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.