The Dual Role of Abstracting over the Irrelevant in Symbolic Explanations: Cognitive Effort vs. Understanding
- URL: http://arxiv.org/abs/2602.03467v1
- Date: Tue, 03 Feb 2026 12:42:41 GMT
- Title: The Dual Role of Abstracting over the Irrelevant in Symbolic Explanations: Cognitive Effort vs. Understanding
- Authors: Zeynep G. Saribatur, Johannes Langer, Ute Schmid
- Abstract summary: Explanations are central to human cognition, yet AI systems often produce outputs that are difficult to understand. We investigate how formal abstractions, specifically removal and clustering, impact human reasoning performance and cognitive effort.
- Score: 5.17306267654726
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explanations are central to human cognition, yet AI systems often produce outputs that are difficult to understand. While symbolic AI offers a transparent foundation for interpretability, raw logical traces often impose a high extraneous cognitive load. We investigate how formal abstractions, specifically removal and clustering, impact human reasoning performance and cognitive effort. Utilizing Answer Set Programming (ASP) as a formal framework, we define a notion of irrelevant details to be abstracted over to obtain simplified explanations. Our cognitive experiments, in which participants classified stimuli across domains with explanations derived from an answer set program, show that clustering details significantly improves participants' understanding, while removing details significantly reduces cognitive effort, supporting the hypothesis that abstraction enhances human-centered symbolic explanations.
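The paper formalizes removal and clustering within ASP; purely as a language-agnostic illustration, the Python sketch below applies the two operations to a toy explanation. The predicate names, the irrelevance set, and the cluster map are all invented for this example, not taken from the paper.
```python
# Hypothetical sketch of the two abstraction operations described above:
# "removal" drops literals marked irrelevant, "clustering" replaces groups
# of related literals with a single abstract literal. All names below are
# illustrative, not taken from the paper.

# A raw explanation: the literals that fired for one classified stimulus.
raw_explanation = [
    "has_wings", "has_feathers", "lays_eggs",   # relevant to class "bird"
    "color_brown", "size_small",                # irrelevant details
]

IRRELEVANT = {"color_brown", "size_small"}

# Clusters map several concrete literals to one abstract literal.
CLUSTERS = {
    "has_wings": "bird_anatomy",
    "has_feathers": "bird_anatomy",
    "lays_eggs": "bird_reproduction",
}

def abstract_by_removal(literals):
    """Drop literals marked irrelevant; everything else stays concrete."""
    return [lit for lit in literals if lit not in IRRELEVANT]

def abstract_by_clustering(literals):
    """Replace clustered literals with their abstract label, deduplicated."""
    seen, result = set(), []
    for lit in literals:
        label = CLUSTERS.get(lit, lit)
        if label not in seen:
            seen.add(label)
            result.append(label)
    return result

print(abstract_by_removal(raw_explanation))
# ['has_wings', 'has_feathers', 'lays_eggs']
print(abstract_by_clustering(raw_explanation))
# ['bird_anatomy', 'bird_reproduction', 'color_brown', 'size_small']
```
Removal shortens the explanation (plausibly lowering effort), while clustering keeps all details but re-expresses them at a coarser granularity (plausibly aiding understanding), mirroring the dissociation the abstract reports.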
Related papers
- Toward Cognitive Supersensing in Multimodal Large Language Model [67.15559571626747]
We introduce Cognitive Supersensing, a training paradigm that endows MLLMs with human-like visual imagery capabilities. In experiments, MLLMs trained with Cognitive Supersensing significantly outperform state-of-the-art baselines on CogSense-Bench. We will open-source CogSense-Bench and our model weights.
arXiv Detail & Related papers (2026-02-02T02:19:50Z)
- How do Transformers Learn Implicit Reasoning? [67.02072851088637]
We study how implicit multi-hop reasoning emerges by training transformers from scratch in a controlled symbolic environment. We find that training with atomic triples is not necessary but accelerates learning, and that second-hop generalization relies on query-level exposure to specific compositional structures.
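For intuition only, here is a minimal sketch of what "atomic triples" and a two-hop composition over them might look like in such a symbolic environment; the entities and relations are invented, not the paper's actual setup.
```python
# Hypothetical sketch: atomic facts are (head, relation, tail) triples,
# and a two-hop query composes two relations. All names are invented.
atomic_facts = [
    ("alice", "mother_of", "bob"),
    ("bob", "mother_of", "carol"),
]

def answer_two_hop(facts, head, rel1, rel2):
    """Compose rel1 then rel2: find t2 with (head, rel1, t1) and (t1, rel2, t2)."""
    for h1, r1, t1 in facts:
        if (h1, r1) == (head, rel1):
            for h2, r2, t2 in facts:
                if (h2, r2) == (t1, rel2):
                    return t2
    return None

# "grandmother" emerges only by composing mother_of twice.
print(answer_two_hop(atomic_facts, "alice", "mother_of", "mother_of"))  # carol
```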
arXiv Detail & Related papers (2025-05-29T17:02:49Z)
- Abductive Symbolic Solver on Abstraction and Reasoning Corpus [5.903948032748941]
Humans solve visual reasoning tasks based on their observations and hypotheses, and they can explain their solutions with a proper reason. Previous approaches focused only on grid transitions, which is not enough for AI to provide reasonable and human-like solutions. We propose a novel framework that symbolically represents the observed data as a knowledge graph and extracts core knowledge that can be used for solution generation.
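As a rough illustration of encoding observations as a knowledge graph, the sketch below turns a tiny grid into subject-relation-object triples; the schema (cells, colors, adjacency relations) is hypothetical, not the paper's actual representation.
```python
# Hypothetical sketch: a 2x2 grid encoded as knowledge-graph triples
# (cell -> color, cell -> spatial neighbor). Schema is invented.
grid = [
    [0, 1],
    [1, 0],
]

def grid_to_triples(grid):
    triples = []
    for r, row in enumerate(grid):
        for c, color in enumerate(row):
            cell = f"cell_{r}_{c}"
            triples.append((cell, "has_color", f"color_{color}"))
            if c + 1 < len(row):   # this cell sits left of its right neighbor
                triples.append((cell, "left_of", f"cell_{r}_{c+1}"))
            if r + 1 < len(grid):  # this cell sits above the cell below it
                triples.append((cell, "above", f"cell_{r+1}_{c}"))
    return triples

for triple in grid_to_triples(grid):
    print(triple)
```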
arXiv Detail & Related papers (2024-11-27T09:09:00Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
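For intuition, a minimal sketch of the clustering idea: an abstraction map tau merges several low-level variables and coarsens their domains into one high-level variable. The variable names and threshold are invented, and a real neural causal model would learn such a map rather than hard-code it.
```python
# Hypothetical sketch of clustering variables and domains: three low-level
# pixel variables collapse into one binary high-level variable. Invented.
def tau(low_level_state):
    """Abstraction map: cluster low-level variables into a high-level one."""
    brightness = sum(low_level_state[k] for k in ("px_a", "px_b", "px_c")) / 3
    return {"light_on": brightness > 0.5}  # coarse binary domain

print(tau({"px_a": 0.9, "px_b": 0.8, "px_c": 0.7}))  # {'light_on': True}
print(tau({"px_a": 0.1, "px_b": 0.0, "px_c": 0.2}))  # {'light_on': False}
```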
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- An Information Bottleneck Characterization of the Understanding-Workload Tradeoff [15.90243405031747]
Consideration of human factors that impact explanation efficacy is central to explainable AI (XAI) design.
Existing work in XAI has demonstrated a tradeoff between understanding and workload induced by different types of explanations.
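The underlying objective is presumably a variant of the standard Information Bottleneck Lagrangian, in which a compressed representation T trades off compression of X against prediction of Y; reading X as the full explanation content, T as what is presented, and Y as what the user must understand is my gloss, not a quote from the paper.
```latex
% Standard Information Bottleneck objective (beta trades compression
% against predictive value); the paper's exact formulation may differ.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```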
arXiv Detail & Related papers (2023-10-11T18:35:26Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
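To make "fuzzy continuous relaxation" concrete, here is a generic sketch (not LOGICSEG's actual operators) in which Boolean connectives are replaced by differentiable surrogates such as the product t-norm, so a grounded formula over predicate scores yields a real value that can enter a training loss.
```python
# Generic fuzzy relaxation sketch: logical operators become differentiable
# surrogates, so formulae over predicate scores are real-valued. Invented
# predicate names and scores; LOGICSEG's actual operators may differ.
def f_and(a, b):   # conjunction -> product t-norm
    return a * b

def f_or(a, b):    # disjunction -> probabilistic sum
    return a + b - a * b

def f_not(a):      # negation -> complement
    return 1.0 - a

# Relaxed truth value of: is_animal(x) AND (is_dog(x) OR is_cat(x))
scores = {"is_animal": 0.9, "is_dog": 0.7, "is_cat": 0.2}
value = f_and(scores["is_animal"], f_or(scores["is_dog"], scores["is_cat"]))
print(round(value, 3))  # 0.684
```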
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Learning Differentiable Logic Programs for Abstract Visual Reasoning [22.167393386879294]
Differentiable forward reasoning has been developed to integrate reasoning with gradient-based machine learning paradigms. NEUMANN is a graph-based differentiable forward reasoner, passing messages in a memory-efficient manner and handling structured programs with functors. We demonstrate that NEUMANN solves visual reasoning tasks efficiently, outperforming neural, symbolic, and neuro-symbolic baselines.
arXiv Detail & Related papers (2023-07-03T11:02:40Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- A Closer Look at Reward Decomposition for High-Level Robotic Explanations [18.019811754800767]
We propose an explainable Q-Map learning framework that combines reward decomposition with abstracted action spaces.
We demonstrate the effectiveness of our framework through quantitative and qualitative analysis of two robotic scenarios.
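Reward decomposition for explanation typically keeps one Q-estimate per reward component and sums them for action selection, so a chosen action can be attributed to the component that dominates; below is a minimal sketch with invented numbers, not the paper's Q-Map implementation.
```python
# Hypothetical reward-decomposition sketch: the total Q-value is the sum of
# per-component Q-values, enabling component-level explanations. Invented.
q_components = {
    "reach_goal":   {"left": 0.2, "right": 0.9},
    "avoid_damage": {"left": 0.6, "right": 0.1},
}

def total_q(action):
    return sum(q[action] for q in q_components.values())

best = max(("left", "right"), key=total_q)
print(best, {name: q[best] for name, q in q_components.items()})
# right {'reach_goal': 0.9, 'avoid_damage': 0.1}
# -> "right" is chosen mainly because it serves the reach_goal component.
```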
arXiv Detail & Related papers (2023-04-25T16:01:42Z)
- Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.