Counterfactual reasoning: an analysis of in-context emergence
- URL: http://arxiv.org/abs/2506.05188v2
- Date: Tue, 21 Oct 2025 16:08:36 GMT
- Title: Counterfactual reasoning: an analysis of in-context emergence
- Authors: Moritz Miller, Bernhard Schölkopf, Siyuan Guo
- Abstract summary: We show that language models are capable of counterfactual reasoning. We find that self-attention, model depth, and pre-training data diversity drive performance. Our findings extend to counterfactual reasoning under SDE dynamics.
- Score: 57.118735341305786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale neural language models exhibit remarkable performance in in-context learning: the ability to learn and reason about the input context on the fly. This work studies in-context counterfactual reasoning in language models, that is, the ability to predict the consequences of a hypothetical scenario. We focus on a well-defined, synthetic linear regression task that requires noise abduction. Accurate prediction is based on (1) inferring an unobserved latent concept and (2) copying contextual noise from factual observations. We show that language models are capable of counterfactual reasoning. Further, we enhance existing identifiability results and reduce counterfactual reasoning for a broad class of functions to a transformation on in-context observations. In Transformers, we find that self-attention, model depth, and pre-training data diversity drive performance. Moreover, we provide mechanistic evidence that the latent concept is linearly represented in the residual stream, and we introduce designated noise abduction heads that are central to performing counterfactual reasoning. Lastly, our findings extend to counterfactual reasoning under SDE dynamics and show that Transformers can perform noise abduction on sequential data, providing preliminary evidence of the potential for counterfactual story generation. Our code is available at https://github.com/mrtzmllr/iccr.
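To make the setup concrete, the following is a minimal numpy sketch of the abduction-then-prediction recipe the abstract describes for the linear regression task. The dimensions, noise scale, and intervention are illustrative assumptions rather than the paper's exact configuration; the authors' code is at https://github.com/mrtzmllr/iccr.

```python
import numpy as np

rng = np.random.default_rng(0)

# Structural model: y = w^T x + eps, with unobserved latent concept w
# and exogenous noise eps (the "contextual noise" to be abducted).
d = 4
w = rng.normal(size=d)             # latent concept (unknown to the learner)
x_fact = rng.normal(size=d)        # factual input
eps = 0.1 * rng.normal()           # factual noise, shared by the counterfactual
y_fact = w @ x_fact + eps          # factual observation

# (1) Infer the latent concept from in-context examples.
X_ctx = rng.normal(size=(32, d))
y_ctx = X_ctx @ w + 0.1 * rng.normal(size=32)
w_hat, *_ = np.linalg.lstsq(X_ctx, y_ctx, rcond=None)

# (2) Abduct the noise: copy it from the factual observation.
eps_hat = y_fact - w_hat @ x_fact

# Counterfactual query: intervene on x and reuse the abducted noise.
x_cf = x_fact.copy()
x_cf[0] = 2.0                      # hypothetical intervention on one coordinate
y_cf_pred = w_hat @ x_cf + eps_hat

y_cf_true = w @ x_cf + eps         # ground-truth counterfactual
print(f"predicted {y_cf_pred:.3f} vs. true {y_cf_true:.3f}")
```

Without step (2), the model could at best predict the interventional mean `w_hat @ x_cf`; copying the abducted noise is what makes the prediction counterfactual.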
Related papers
- Shape of Thought: When Distribution Matters More than Correctness in Reasoning Tasks [24.55929874173401]
We show that a language model's reasoning capabilities can be improved by training on datasets of chain-of-thought traces from more capable models. Experiments show this approach can yield better performance on reasoning tasks than training on human-annotated datasets.
arXiv Detail & Related papers (2025-12-24T07:35:55Z) - Context-Informed Grounding Supervision [102.11698329887226]
Context-INformed Grounding Supervision (CINGS) is a post-training supervision method in which the model is trained with relevant context prepended to the response. Our experiments demonstrate that models trained with CINGS exhibit stronger grounding in both textual and visual domains.
arXiv Detail & Related papers (2025-06-18T14:13:56Z) - I Predict Therefore I Am: Is Next Token Prediction Enough to Learn Human-Interpretable Concepts from Data? [76.15163242945813]
Large language models (LLMs) have led many to conclude that they exhibit a form of intelligence. We introduce a novel generative model that generates tokens on the basis of human-interpretable concepts represented as latent discrete variables.
arXiv Detail & Related papers (2025-03-12T01:21:17Z) - In-Context Learning (and Unlearning) of Length Biases [19.740652268957522]
We show that models learn length biases in the context window for their predictions. We further empirically analyze the factors that modulate the level of bias exhibited by the model. This reveals the power of in-context learning in debiasing model prediction behaviors without the need for costly parameter updates.
arXiv Detail & Related papers (2025-02-10T16:43:32Z) - Spin glass model of in-context learning [2.285821277711785]
We study a transformer with linear attention (sketched below) and map this structure to a spin glass model with real-valued spins. Our theory reveals that for single-instance learning, increasing the task diversity leads to the emergence of in-context learning. The proposed analytically tractable model thus offers a promising avenue for interpreting many intriguing but puzzling properties of large language models.
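For reference, "linear attention" in analyses of this kind usually means attention with the softmax removed, so that scores are raw dot products. Below is a minimal numpy sketch under that assumption; the function name and shapes are illustrative, not the paper's notation.

```python
import numpy as np

def linear_attention(X, W_q, W_k, W_v):
    # One attention head with the softmax removed:
    # output = (Q K^T / n) V, so scores are raw dot products.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    n = X.shape[0]
    return (Q @ K.T / n) @ V

# Toy usage: n = 8 tokens of dimension d = 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
out = linear_attention(X, W_q, W_k, W_v)   # shape (8, 4)
```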
arXiv Detail & Related papers (2024-08-05T07:54:01Z) - Probabilistic Transformer: A Probabilistic Dependency Model for Contextual Word Representation [52.270712965271656]
We propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective.
We find that the graph of our model resembles transformers, with correspondences between dependencies and self-attention.
Experiments show that our model performs competitively with transformers on small- to medium-sized datasets.
arXiv Detail & Related papers (2023-11-26T06:56:02Z) - In-Context Learning through the Bayesian Prism [16.058624485018207]
In-context learning (ICL) is one of the surprising and useful features of large language models.
In this paper we empirically examine how far this Bayesian perspective can help us understand ICL.
arXiv Detail & Related papers (2023-06-08T02:38:23Z) - Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a pretraining paradigm should be carried out in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z) - Attention-likelihood relationship in transformers [2.8304391396200064]
We analyze how large language models (LLMs) represent out-of-context words, investigating their reliance on the given context to capture their semantics.
Our likelihood-guided text perturbations reveal a correlation between token likelihood and attention values in transformer-based language models.
arXiv Detail & Related papers (2023-03-15T00:23:49Z) - On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning [0.0]
We investigate how pre-training on data of different modalities, such as language and vision, affects fine-tuning of Transformer-based models on MuJoCo offline reinforcement learning tasks.
arXiv Detail & Related papers (2022-11-17T13:34:08Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of the designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z) - Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs [0.0]
In RNNs, suboptimally encoded information can degrade the quality of representations built from later elements in the sequence.
I propose an augmentation to standard RNNs in the form of a gradient-based correction mechanism (a minimal sketch follows below).
I conduct different experiments in the context of language modeling, where the impact of using such a mechanism is examined in detail.
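A minimal numpy sketch of what one such correction step could look like, assuming a softmax output layer and using the next token's cross-entropy as the correction signal; `recode_hidden` and the step size are illustrative, not the work's actual implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def recode_hidden(h, W_out, target, lr=0.1):
    # One gradient step on the hidden state itself, nudging it to
    # raise the likelihood of the observed next token `target`.
    p = softmax(W_out @ h)             # predicted next-token distribution
    grad_logits = p.copy()
    grad_logits[target] -= 1.0         # d(cross-entropy)/d(logits)
    grad_h = W_out.T @ grad_logits     # backprop through the output layer
    return h - lr * grad_h             # corrected hidden state

# Toy usage: hidden size 16, vocabulary size 50.
rng = np.random.default_rng(0)
h = rng.normal(size=16)
W_out = rng.normal(size=(50, 16))
h_new = recode_hidden(h, W_out, target=7)
```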
arXiv Detail & Related papers (2021-01-03T17:54:17Z) - Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning [79.48769764508006]
Generative language models (LMs) are typically trained to condition only on the past context or to perform narrowly scoped text-infilling.
We propose DeLorean, a new unsupervised decoding algorithm that can flexibly incorporate both the past and future contexts.
We demonstrate that our approach is general and applicable to two nonmonotonic reasoning tasks: abductive text generation and counterfactual story revision.
arXiv Detail & Related papers (2020-10-12T17:58:43Z) - Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
arXiv Detail & Related papers (2020-06-11T17:02:20Z) - CausaLM: Causal Model Explanation Through Counterfactual Language Models [33.29636213961804]
CausaLM is a framework for producing causal model explanations using counterfactual language representation models.
We show that language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest.
A byproduct of our method is a language representation model that is unaffected by the tested concept.
arXiv Detail & Related papers (2020-05-27T15:06:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.