Abstract Activation Spaces for Content-Invariant Reasoning in Large Language Models
- URL: http://arxiv.org/abs/2602.02462v1
- Date: Mon, 02 Feb 2026 18:48:44 GMT
- Title: Abstract Activation Spaces for Content-Invariant Reasoning in Large Language Models
- Authors: Gabriele Maraia, Marco Valentino, Fabio Massimo Zanzotto, Leonardo Ranaldi
- Abstract summary: We introduce a framework for abstraction-guided reasoning that explicitly separates structural inference from lexical semantics. We show that abstraction-aligned steering reduces content-driven errors and improves validity-sensitive performance.
- Score: 28.102903742881576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) often struggle with deductive judgment in syllogistic reasoning, systematically conflating semantic plausibility with formal validity, a phenomenon known as the content effect. This bias persists even when models generate step-wise explanations, indicating that intermediate rationales may inherit the same semantic shortcuts that affect answers. Recent approaches propose mitigating this issue by increasing inference-time structural constraints, either by encouraging abstract intermediate representations or by intervening directly in the model's internal computations; however, reliably suppressing semantic interference remains an open challenge. To make formal deduction less sensitive to semantic content, we introduce a framework for abstraction-guided reasoning that explicitly separates structural inference from lexical semantics. We construct paired content-laden and abstract syllogisms and use the model's activations on abstract inputs to define an abstract reasoning space. We then learn lightweight Abstractors that, from content-conditioned residual-stream states, predict representations aligned with this space and integrate these predictions via multi-layer interventions during the forward pass. Using cross-lingual transfer as a test bed, we show that abstraction-aligned steering reduces content-driven errors and improves validity-sensitive performance. Our results position activation-level abstraction as a scalable mechanism for enhancing the robustness of formal reasoning in LLMs against semantic interference.
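To make the pipeline concrete, below is a minimal, illustrative sketch of the Abstractor idea in PyTorch. The abstract only states that lightweight Abstractors map content-conditioned residual-stream states toward the abstract reasoning space and that their predictions are integrated via multi-layer interventions; the MLP architecture, the MSE training objective, the blending coefficient `alpha`, and the layer indices below are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only. Assumptions not stated in the abstract: the
# Abstractor is a small MLP, it is trained with an MSE objective on paired
# content-laden / abstract activations, and interventions blend predictions
# into the residual stream via forward hooks with a fixed coefficient alpha.
import torch
import torch.nn as nn


class Abstractor(nn.Module):
    """Maps content-conditioned residual states toward the abstract reasoning space."""

    def __init__(self, d_model: int, d_hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)


def train_abstractor(abstractor, content_states, abstract_states, epochs=10, lr=1e-3):
    """Fit the Abstractor to predict abstract-input activations from paired
    content-laden activations (both tensors of shape [num_examples, d_model])."""
    opt = torch.optim.Adam(abstractor.parameters(), lr=lr)
    for _ in range(epochs):
        pred = abstractor(content_states)
        loss = nn.functional.mse_loss(pred, abstract_states)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return abstractor


def make_steering_hook(abstractor, alpha: float = 0.5):
    """Forward hook that blends the predicted abstract representation into a
    layer's hidden states during the forward pass."""
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        steered = (1 - alpha) * h + alpha * abstractor(h)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered
    return hook


# Hypothetical usage with one Abstractor per intervened layer (layer indices
# are placeholders, not values from the paper):
# for layer_idx, abstractor in zip([12, 16, 20], abstractors):
#     model.model.layers[layer_idx].register_forward_hook(make_steering_hook(abstractor))
```

In this reading, one Abstractor would be trained per intervened layer on activations collected from the paired content-laden and abstract syllogisms, and the hooks then apply the learned mapping at inference time.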
Related papers
- Latent Thoughts Tuning: Bridging Context and Reasoning with Fused Information in Latent Tokens [13.653741247835091]
Latent Thoughts Tuning (LT-Tuning) is a framework that redefines how latent thoughts are constructed and deployed. We introduce a Context-Prediction-Fusion mechanism that jointly leverages contextual hidden states and predictive semantic guidance. Our method outperforms existing latent reasoning baselines, effectively mitigating feature collapse and achieving robust reasoning accuracy.
arXiv Detail & Related papers (2026-02-10T19:19:10Z) - Emergent Structured Representations Support Flexible In-Context Inference in Large Language Models [77.98801218316505]
Large language models (LLMs) exhibit emergent behaviors suggestive of human-like reasoning. We investigate the internal processing of LLMs during in-context concept inference.
arXiv Detail & Related papers (2026-02-08T03:14:39Z) - Process In-Context Learning: Enhancing Mathematical Reasoning via Dynamic Demonstration Insertion [11.708864769915857]
We propose Process In-Context Learning (PICL) to boost mathematical reasoning by responding to real-time inference needs. PICL operates in two stages: 1) it identifies potential confusion points by analyzing semantics and entropy in the reasoning process and summarizes their core characteristics; 2) it retrieves relevant demonstrations from the demonstration pool that match the confusion context and inserts them directly into the ongoing reasoning process to guide subsequent steps.
arXiv Detail & Related papers (2026-01-17T09:20:06Z) - Stable Language Guidance for Vision-Language-Action Models [62.80963701282789]
Residual Semantic Steering (RSS) is a probabilistic framework that disentangles physical affordance from semantic execution. RSS achieves state-of-the-art robustness, maintaining performance even under adversarial linguistic perturbations.
arXiv Detail & Related papers (2026-01-07T16:16:10Z) - Comparative Expressivity for Structured Argumentation Frameworks with Uncertain Rules and Premises [0.7967000209136494]
We study plausible instantiations of abstract models structured within rules and premises. Our main technical contribution is the introduction of a notion of expressivity that can handle abstract and structured formalisms.
arXiv Detail & Related papers (2025-10-21T13:36:38Z) - Explaining multimodal LLMs via intra-modal token interactions [55.27436637894534]
Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse vision-language tasks, yet their internal decision-making mechanisms remain insufficiently understood. We propose enhancing interpretability by leveraging intra-modal interaction.
arXiv Detail & Related papers (2025-09-26T14:39:13Z) - Causal Abstraction Inference under Lossy Representations [53.18851962820361]
We introduce a new type of abstraction, called projected abstractions, that generalizes existing definitions to accommodate lossy representations. We show how to construct a projected abstraction from the low-level model and how it translates equivalent observational, interventional, and counterfactual causal queries from low to high level.
arXiv Detail & Related papers (2025-09-25T21:20:42Z) - Implicit Reasoning in Large Language Models: A Comprehensive Survey [67.53966514728383]
Large Language Models (LLMs) have demonstrated strong generalization across a wide range of tasks. Recent studies have shifted attention from explicit chain-of-thought prompting toward implicit reasoning. This survey introduces a taxonomy centered on execution paradigms, shifting the focus from representational forms to computational strategies.
arXiv Detail & Related papers (2025-09-02T14:16:02Z) - Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering [14.298418197820912]
Large language models (LLMs) frequently demonstrate reasoning limitations, often conflating content plausibility with logical validity. This can result in biased inferences, where plausible arguments are incorrectly deemed logically valid or vice versa. This paper investigates the problem of mitigating content biases on formal reasoning through activation steering; a generic sketch of this class of residual-stream intervention appears after this list.
arXiv Detail & Related papers (2025-05-18T01:34:34Z) - Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z) - An Experimental Study of Semantic Continuity for Deep Learning Models [11.883949320223078]
We argue that semantic discontinuity results from inappropriate training targets and contributes to notorious issues such as adversarial robustness, interpretability, etc.
We first conduct data analysis to provide evidence of semantic discontinuity in existing deep learning models, and then design a simple semantic continuity constraint which theoretically enables models to obtain smooth gradients and learn semantic-oriented features.
arXiv Detail & Related papers (2020-11-19T12:23:28Z)
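As referenced in the activation-steering entry above, the following is a generic sketch of activation-level steering with a contrastive direction vector. It illustrates the broad class of residual-stream interventions these papers build on, not the specific method of any paper listed; the difference-of-means direction, the scaling factor, and the layer index are illustrative assumptions.

```python
# Generic activation-steering sketch (illustrative; not any paper's exact method):
# derive a steering direction as a difference of mean activations between two
# contrastive prompt sets, then add it to the residual stream at inference time.
import torch


def steering_direction(acts_a: torch.Tensor, acts_b: torch.Tensor) -> torch.Tensor:
    """acts_a, acts_b: [num_prompts, d_model] activations from contrastive sets
    (e.g. validity-driven vs. plausibility-driven completions)."""
    direction = acts_a.mean(dim=0) - acts_b.mean(dim=0)
    return direction / direction.norm()


def make_additive_hook(direction: torch.Tensor, scale: float = 4.0):
    """Forward hook that adds the scaled direction to a layer's hidden states."""
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        h = h + scale * direction.to(dtype=h.dtype, device=h.device)
        return (h, *output[1:]) if isinstance(output, tuple) else h
    return hook


# Hypothetical usage (layer index and scale are placeholders):
# handle = model.model.layers[14].register_forward_hook(make_additive_hook(direction))
# ... run generation ...
# handle.remove()
```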