A Cross-Domain Evaluation of Approaches for Causal Knowledge Extraction
- URL: http://arxiv.org/abs/2308.03891v1
- Date: Mon, 7 Aug 2023 19:50:59 GMT
- Title: A Cross-Domain Evaluation of Approaches for Causal Knowledge Extraction
- Authors: Anik Saha, Oktie Hassanzadeh, Alex Gittens, Jian Ni, Kavitha Srinivas,
Bulent Yener
- Abstract summary: Causal knowledge extraction is the task of extracting relevant causes and effects from text by detecting the causal relation.
We perform a thorough analysis of three sequence tagging models for causal knowledge extraction and compare them with a span-based approach to causality extraction.
Our experiments show that embeddings from pre-trained language models (e.g. BERT) provide a significant performance boost on this task.
- Score: 12.558498579998862
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Causal knowledge extraction is the task of extracting relevant causes and
effects from text by detecting the causal relation. Although this task is
important for language understanding and knowledge discovery, recent works in
this domain have largely focused on binary classification of a text segment as
causal or non-causal. In this regard, we perform a thorough analysis of three
sequence tagging models for causal knowledge extraction and compare them with a
span-based approach to causality extraction. Our experiments show that
embeddings from pre-trained language models (e.g. BERT) provide a significant
performance boost on this task compared to previous state-of-the-art models
with complex architectures. We observe that span-based models perform better
than simple sequence tagging models based on BERT across all four datasets from
diverse domains with different types of cause-effect phrases.
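The sequence tagging formulation can be made concrete with a small sketch (the label scheme and helper below are illustrative, not the paper's code): cause and effect phrases are marked with BIO labels such as B-C/I-C and B-E/I-E, and a tagger's per-token predictions are decoded back into typed spans.

```python
def decode_bio(tokens, labels):
    """Decode BIO labels (B-C/I-C for cause phrases, B-E/I-E for effect
    phrases, O otherwise) into a list of (type, phrase) spans."""
    spans, cur_type, cur_toks = [], None, []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            # A new span starts; close any span in progress first.
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = lab[2:], [tok]
        elif lab.startswith("I-") and cur_type == lab[2:]:
            # Continuation of the current span.
            cur_toks.append(tok)
        else:
            # "O" or an inconsistent I- tag ends the current span.
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = None, []
    if cur_type:
        spans.append((cur_type, " ".join(cur_toks)))
    return spans

tokens = "heavy rain caused severe flooding".split()
labels = ["B-C", "I-C", "O", "B-E", "I-E"]
print(decode_bio(tokens, labels))
# [('C', 'heavy rain'), ('E', 'severe flooding')]
```

A span-based model, by contrast, scores candidate token spans directly rather than predicting one label per token, which is one way to interpret the paper's finding that span-based models handle diverse cause-effect phrase types better.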
Related papers
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- Constructing and Interpreting Causal Knowledge Graphs from News [3.3071569417370745]
Many financial jobs rely on news to learn about causal events in the past and present, to make informed decisions and predictions about the future.
We propose a methodology to construct causal knowledge graphs (KGs) from news using two steps: (1) Extraction of Causal Relations, and (2) Argument Clustering and Representation into KG.
arXiv Detail & Related papers (2023-05-16T11:33:32Z)
- REKnow: Enhanced Knowledge for Joint Entity and Relation Extraction [30.829001748700637]
Relation extraction is a challenging task that aims to extract all hidden relational facts from the text.
There is no unified framework that works well under various relation extraction settings.
We propose a knowledge-enhanced generative model to mitigate these two issues.
Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
arXiv Detail & Related papers (2022-06-10T13:59:38Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- TAGPRIME: A Unified Framework for Relational Structure Extraction [71.88926365652034]
TAGPRIME is a sequence tagging model that appends priming words about the information of the given condition to the input text.
With the self-attention mechanism in pre-trained language models, the priming words make the output contextualized representations contain more information about the given condition.
Extensive experiments and analyses on three different tasks that cover ten datasets across five different languages demonstrate the generality and effectiveness of TAGPRIME.
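The priming idea described above can be sketched in a few lines (the function name, separator token, and example condition are assumptions for illustration, not TAGPRIME's actual interface): words describing the given condition are appended to the input, so the self-attention layers of a pre-trained encoder can mix condition information into every token's representation.

```python
def prime_input(sentence, condition_words, sep="[SEP]"):
    """Append priming words about the given condition (e.g. a relation type
    or trigger word) to the input text. Illustrative sketch only: the real
    model feeds this primed string to a pre-trained encoder, whose
    self-attention lets each token attend to the condition words."""
    return f"{sentence} {sep} {' '.join(condition_words)}"

primed = prime_input("Heavy rain caused flooding.", ["cause", "effect"])
print(primed)
# Heavy rain caused flooding. [SEP] cause effect
```

The tagger then labels only the original sentence tokens, but their contextualized representations now carry information about the condition appended after the separator.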
arXiv Detail & Related papers (2022-05-25T08:57:46Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.