Guided Generation of Cause and Effect
- URL: http://arxiv.org/abs/2107.09846v1
- Date: Wed, 21 Jul 2021 02:32:47 GMT
- Title: Guided Generation of Cause and Effect
- Authors: Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme
- Abstract summary: We present a conditional text generation framework that posits sentential expressions of possible causes and effects.
This framework depends on two novel resources: a large-scale collection of English sentences expressing causal patterns, CausalBank, and a refinement of previous work on constructing large lexical causal knowledge graphs, Cause Effect Graph.
- Score: 52.44584102429394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a conditional text generation framework that posits sentential expressions of possible causes and effects. This framework depends on two novel resources we develop in the course of this work: a very large-scale collection of English sentences expressing causal patterns, CausalBank; and a refinement over previous work on constructing large lexical causal knowledge graphs, Cause Effect Graph. Further, we extend prior work in lexically-constrained decoding to support disjunctive positive constraints. Human assessment confirms that our approach gives high-quality and diverse outputs. Finally, we use CausalBank to perform continued training of an encoder supporting a recent state-of-the-art model for causal reasoning, leading to a 3-point improvement on the COPA challenge set with no change in model architecture.
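To illustrate what a disjunctive positive constraint does at decoding time, here is a minimal sketch using the Hugging Face transformers `DisjunctiveConstraint` (a later, independent implementation of the same idea, not the authors' code); the checkpoint, prompt, and keyword set are placeholders:

```python
# Minimal sketch of disjunctive positive constraints in beam search.
# Uses Hugging Face transformers' DisjunctiveConstraint, an independent
# implementation of the idea; checkpoint and keywords are examples only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DisjunctiveConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# The decoder must emit at least ONE of these surface forms, e.g. the
# inflections of a cause keyword proposed by a causal knowledge graph.
keyword_forms = ["rain", "rained", "raining"]
constraint = DisjunctiveConstraint(
    [tokenizer(form, add_special_tokens=False).input_ids for form in keyword_forms]
)

inputs = tokenizer("The game was cancelled because", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    constraints=[constraint],
    num_beams=8,           # constrained decoding requires beam search
    max_new_tokens=32,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

A plain positive constraint forces one fixed phrase into every hypothesis; the disjunctive form lets the beam satisfy the constraint with whichever member of the set fits the sentence, which is what allows a lemma from a resource like Cause Effect Graph to surface in any inflection.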
Related papers
- Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [51.967603572656266]
We introduce a consistent and theoretically grounded approach to annotating decompositional entailment.
We find that our new dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality.
arXiv Detail & Related papers (2024-02-22T18:55:17Z)
- Causal Document-Grounded Dialogue Pre-training [81.16429056652483]
We present a causally complete dataset construction strategy for building million-scale DocGD pre-training corpora.
Experiments on three benchmark datasets demonstrate that our causal pre-training achieves considerable and consistent improvements under fully-supervised, low-resource, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-18T12:39:25Z)
- CausalDialogue: Modeling Utterance-level Causality in Conversations [83.03604651485327]
We have compiled and expanded a new dataset, CausalDialogue, through crowdsourcing.
This dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure.
We propose a causality-enhanced training method, Exponential Average Treatment Effect (ExMATE), to strengthen the impact of utterance-level causality in training neural conversation models.
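As a rough illustration of that DAG structure (the node and edge fields below are hypothetical, not the dataset's actual schema):

```python
# Hypothetical sketch of utterance-level cause-effect pairs stored as a
# DAG; the real CausalDialogue schema may differ.
import networkx as nx

dag = nx.DiGraph()
dag.add_node("u1", speaker="A", text="I lost my keys.")
dag.add_node("u2", speaker="B", text="Did you check your coat pockets?")
dag.add_node("u3", speaker="A", text="Found them, thanks!")

# Unlike a linear dialogue history, a DAG lets one utterance have
# several causes or several effects.
dag.add_edge("u1", "u2")  # u1 causes u2
dag.add_edge("u2", "u3")  # u2 causes u3

assert nx.is_directed_acyclic_graph(dag)
for cause, effect in dag.edges:
    print(f"{dag.nodes[cause]['text']!r} -> {dag.nodes[effect]['text']!r}")
```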
arXiv Detail & Related papers (2022-12-20T18:31:50Z)
- A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models [81.15974174627785]
We study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.
Our analysis shows that robustness does not appear to continuously improve as a function of size, but the GPT-3 Davinci models (175B) achieve a dramatic improvement in both robustness and sensitivity compared to all other GPT variants.
arXiv Detail & Related papers (2022-10-21T15:12:37Z)
- ContraCLM: Contrastive Learning For Causal Language Model [54.828635613501376]
We present ContraCLM, a novel contrastive learning framework that operates at both the token level and the sequence level.
We show that ContraCLM enhances the discrimination of representations and bridges the gap with encoder-only models.
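For flavor, here is a generic sequence-level contrastive (InfoNCE) loss of the kind such frameworks build on; this is a textbook formulation, not ContraCLM's exact objective:

```python
# Generic sequence-level InfoNCE loss sketch (not ContraCLM's exact loss).
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.05):
    """z1, z2: (batch, dim) representations of two views of the same sequences."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```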
arXiv Detail & Related papers (2022-10-03T18:56:35Z)
- A Causal Lens for Controllable Text Generation [36.26478600135344]
This paper proposes to formulate controllable text generation from a principled causal perspective.
A direct advantage of the causal formulation is the use of rich causality tools to mitigate generation biases and improve control.
Experiments show that the causal approach significantly outperforms previous conditional models in control accuracy and bias reduction.
arXiv Detail & Related papers (2022-01-22T19:31:43Z)
- CURIE: An Iterative Querying Approach for Reasoning About Situations [36.2000733486444]
We propose a method that iteratively builds a graph of relevant consequences, represented explicitly as a structured situational graph (st-graph), using natural language queries over a finetuned language model (M).
We show that st-graphs generated by CURIE improve a situational reasoning end task (WIQA-QA) by 3 points in accuracy, simply by augmenting the input with our generated situational graphs.
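A schematic of that iterative querying loop might look like the following (the function names, stopping rule, and canned responses are assumptions for illustration, not CURIE's actual interface):

```python
# Hypothetical sketch of CURIE-style iterative graph construction:
# repeatedly query a language model for consequences of an event and
# grow a situational graph. query_model() stands in for the finetuned M.
def query_model(event: str) -> list[str]:
    # Placeholder for a natural language query against the finetuned LM.
    canned = {"it starts raining": ["the ground gets wet", "people open umbrellas"]}
    return canned.get(event, [])

def build_st_graph(seed: str, max_depth: int = 2) -> dict[str, list[str]]:
    graph: dict[str, list[str]] = {}
    frontier = [(seed, 0)]
    while frontier:
        event, depth = frontier.pop()
        if event in graph or depth >= max_depth:
            continue  # already expanded, or deep enough
        consequences = query_model(event)
        graph[event] = consequences
        frontier.extend((c, depth + 1) for c in consequences)
    return graph

print(build_st_graph("it starts raining"))
```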
arXiv Detail & Related papers (2021-04-01T23:51:33Z)