Towards Fine-grained Causal Reasoning and QA
- URL: http://arxiv.org/abs/2204.07408v1
- Date: Fri, 15 Apr 2022 10:12:46 GMT
- Title: Towards Fine-grained Causal Reasoning and QA
- Authors: Linyi Yang, Zhen Wang, Yuxiang Wu, Jie Yang, Yue Zhang
- Abstract summary: Causality is key to the success of NLP applications, especially in high-stakes domains.
This paper introduces a novel fine-grained causal reasoning dataset.
It presents a series of novel predictive tasks in NLP, such as causality detection, event causality extraction, and Causal QA.
- Score: 19.15261898532854
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding causality is key to the success of NLP applications, especially
in high-stakes domains. Causality comes in various forms, such as enable and
prevent, which despite their importance have been largely ignored in the
literature. This paper introduces a novel fine-grained causal reasoning dataset
and presents a series of novel predictive tasks in NLP, such as causality
detection, event causality extraction, and Causal QA. Our dataset contains
human annotations of 25K cause-effect event pairs and 24K question-answering
pairs within multi-sentence samples, where each can have multiple causal
relationships. Through extensive experiments and analysis, we show that the
complex relations in our dataset bring unique challenges to state-of-the-art
methods across all three tasks and highlight potential research opportunities,
especially in developing "causal-thinking" methods.
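The abstract does not specify the released data format, so as a rough illustration only, a record in such a fine-grained causal dataset might be organised along the lines of the sketch below. The field names and the relation labels (cause, enable, prevent) are assumptions for illustration, not the authors' actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for a fine-grained causal reasoning example.
# Field names and relation labels are illustrative assumptions,
# not the dataset's released format.

@dataclass
class CausalRelation:
    cause: str     # span describing the cause event
    effect: str    # span describing the effect event
    relation: str  # fine-grained type, e.g. "cause", "enable", "prevent"

@dataclass
class CausalSample:
    passage: str                                         # multi-sentence text
    relations: List[CausalRelation] = field(default_factory=list)
    qa_pairs: List[dict] = field(default_factory=list)   # {"question": ..., "answer": ...}

# Toy example: one passage can carry multiple causal relationships,
# which is what makes detection and extraction non-trivial.
sample = CausalSample(
    passage=("The new tariff raised import costs. Higher costs prevented the firm "
             "from expanding, which enabled a competitor to gain market share."),
    relations=[
        CausalRelation("the new tariff", "higher import costs", "cause"),
        CausalRelation("higher costs", "the firm's expansion", "prevent"),
        CausalRelation("the firm not expanding", "competitor's market-share gain", "enable"),
    ],
    qa_pairs=[{"question": "What prevented the firm from expanding?",
               "answer": "higher import costs"}],
)

# Causality detection can then be framed as predicting, for a candidate
# event pair, whether any fine-grained relation holds and of which type.
for r in sample.relations:
    print(f"{r.cause} --{r.relation}--> {r.effect}")
```

Under this sketch, event causality extraction and Causal QA would operate over the relations and qa_pairs fields respectively; again, this only illustrates how the three tasks fit together, not the paper's actual release.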
Related papers
- A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the broad scope of Deep Learning-based Computer Vision.
Uncertainty quantification has been extensively studied within this context, enabling the expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision-making.
arXiv Detail & Related papers (2024-11-25T13:26:09Z)
- Causal Inference with Large Language Model: A Survey [5.651037052334014]
Causal inference has been a pivotal challenge across diverse domains such as medicine and economics.
Recent advancements in natural language processing (NLP) have introduced promising opportunities for traditional causal inference tasks.
arXiv Detail & Related papers (2024-09-15T18:43:11Z)
- Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks [14.407025310553225]
Interpretability research takes counterfactual theories of causality for granted.
Counterfactual theories have problems that bias our findings in specific and predictable ways.
We discuss the implications of these challenges for interpretability researchers.
arXiv Detail & Related papers (2024-07-05T17:53:03Z)
- Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm [14.980926991441345]
We show that the causal order can be effectively extracted from datasets containing interventional data under realistic assumptions about the data distribution.
We introduce interventional faithfulness, which relies on comparisons between the marginal distributions of each variable across observational and interventional settings.
We also introduce Intersort, an algorithm designed to infer the causal order from datasets containing large numbers of single-variable interventions.
arXiv Detail & Related papers (2024-05-28T16:07:17Z)
- Revisiting Deep Generalized Canonical Correlation Analysis [30.389620125859356]
Canonical correlation analysis (CCA) is a classic method for discovering latent co-variation that underpins two or more observed random vectors.
Several extensions and variants of CCA have been proposed that strengthen our ability to reveal common random factors from multiview datasets.
In this work, we first revisit the most recent deterministic extensions of deep CCA and highlight the strengths and limitations of these state-of-the-art methods.
arXiv Detail & Related papers (2023-12-20T22:15:10Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [55.66353783572259]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models.
Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z)
- Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z)
- Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.