From Priors to Predictions: Explaining and Visualizing Human Reasoning in a Graph Neural Network Framework
- URL: http://arxiv.org/abs/2512.17255v1
- Date: Fri, 19 Dec 2025 05:56:48 GMT
- Title: From Priors to Predictions: Explaining and Visualizing Human Reasoning in a Graph Neural Network Framework
- Authors: Quan Do, Caroline Ahn, Leah Bakst, Michael Pascale, Joseph T. McGuire, Chantal E. Stern, Michael E. Hasselmo,
- Abstract summary: We formalize inductive biases as explicit, manipulable priors over structure and abstraction. We show that differences in graph-based priors can explain individual differences in human solutions. This work provides a principled, interpretable framework for modeling the representational assumptions and computational dynamics underlying generalization.
- Score: 0.32834818175343855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans excel at solving novel reasoning problems from minimal exposure, guided by inductive biases: assumptions about which entities and relationships matter. Yet the computational form of these biases and their neural implementation remain poorly understood. We introduce a framework that combines graph theory and Graph Neural Networks (GNNs) to formalize inductive biases as explicit, manipulable priors over structure and abstraction. Using a human behavioral dataset adapted from the Abstraction and Reasoning Corpus (ARC), we show that differences in graph-based priors can explain individual differences in human solutions. Our method includes an optimization pipeline that searches over graph configurations, varying edge connectivity and node abstraction, and a visualization approach that identifies the computational graph: the subset of nodes and edges most critical to a model's prediction. Systematic ablation reveals how generalization depends on specific prior structures and internal processing, exposing why human-like errors emerge from incorrect or incomplete priors. This work provides a principled, interpretable framework for modeling the representational assumptions and computational dynamics underlying generalization, offering new insights into human reasoning and a foundation for more human-aligned AI systems.
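The edge-ablation idea behind the visualization approach can be illustrated with a toy sketch (the graph, features, and one-layer message-passing model below are hypothetical stand-ins, not the authors' pipeline): remove each edge in turn and rank edges by how much the model's prediction shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, directed edges as (source, target) pairs.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
n_nodes = 4
X = rng.normal(size=(n_nodes, 2))   # node features
W = rng.normal(size=(2, 2))         # fixed message-passing weights

def normalized_adjacency(edge_list):
    A = np.eye(n_nodes)             # self-loops keep each node's own features
    for src, dst in edge_list:
        A[dst, src] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def predict(edge_list):
    """One round of mean-aggregated message passing, ReLU, then mean readout."""
    H = np.maximum(normalized_adjacency(edge_list) @ X @ W, 0.0)
    return H.mean()

baseline = predict(edges)

# Ablate each edge in turn; the edges whose removal shifts the prediction
# most approximate the "computational graph" behind the output.
impact = {e: abs(predict([f for f in edges if f != e]) - baseline)
          for e in edges}
ranked = sorted(impact, key=impact.get, reverse=True)
```

The same ablation loop generalizes to node abstraction by merging or removing nodes instead of edges.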
Related papers
- GraIP: A Benchmarking Framework For Neural Graph Inverse Problems [31.17028900874544]
We introduce the Neural Graph Inverse Problem conceptual framework, which formalizes and reframes a broad class of graph learning tasks as inverse problems. We demonstrate the versatility of GraIP across various graph learning tasks, including rewiring, causal discovery, and neural relational inference.
arXiv Detail & Related papers (2026-01-26T19:28:16Z) - Structural Graph Neural Networks with Anatomical Priors for Explainable Chest X-ray Diagnosis [0.0]
We present a structural graph reasoning framework that incorporates explicit anatomical priors for explainable vision-based diagnosis. We introduce a custom structural propagation mechanism that explicitly models relative spatial relations as part of the reasoning process. The framework is domain-agnostic and aligns with the broader vision of graph-based reasoning across artificial intelligence systems.
arXiv Detail & Related papers (2026-01-17T09:41:07Z) - Enhancing Explainability of Graph Neural Networks Through Conceptual and Structural Analyses and Their Extensions [0.9645196221785692]
Graph Neural Networks (GNNs) have become a powerful tool for modeling and analyzing data with graph structures. Current Explainable AI (XAI) methods struggle to untangle the intricate relationships and interactions within graphs. This thesis seeks to develop a novel XAI framework tailored for graph-based machine learning.
arXiv Detail & Related papers (2025-12-09T08:13:31Z) - Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm.
This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
arXiv Detail & Related papers (2024-07-03T14:07:41Z) - On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
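A minimal sketch of the retraining idea, under heavily simplified assumptions (graphs reduced to binary edge-indicator vectors, a least-squares classifier standing in for the GNN, and made-up attribution rankings): retrain using only the edges an attribution marks as important, then compare accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each "graph" is a binary vector over 6 candidate edges;
# the label depends only on edge 0 (the ground-truth motif).
n_graphs, n_edges = 200, 6
Xe = (rng.random((n_graphs, n_edges)) < 0.5).astype(float)
y = Xe[:, 0]

def retrain_accuracy(kept_edges):
    """'Retrain' a least-squares classifier on the kept edges only and report
    training accuracy -- a toy stand-in for the retraining protocol."""
    Z = np.c_[Xe[:, kept_edges], np.ones(n_graphs)]  # add a bias column
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return ((Z @ w > 0.5) == y).mean()

# Top-3 edges under a faithful attribution (ranks edge 0 first) versus an
# unfaithful one (ranks edge 0 last).
acc_faithful = retrain_accuracy([0, 1, 2])
acc_unfaithful = retrain_accuracy([3, 4, 5])
```

A faithful attribution keeps the label-relevant edge, so the retrained model recovers the task; an unfaithful one leaves it near chance.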
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Towards Human-like Perception: Learning Structural Causal Model in Heterogeneous Graph [26.361815957385417]
This study introduces a novel solution, HG-SCM (Heterogeneous Graph as Structural Causal Model).
It mimics the human perception and decision process through two key steps: constructing intelligible variables based on semantics derived from the graph schema, and automatically learning task-level causal relationships among these variables via advanced causal discovery techniques.
HG-SCM achieved the highest average performance rank with minimal standard deviation, substantiating its effectiveness and superiority in terms of both predictive power and generalizability.
arXiv Detail & Related papers (2023-12-10T04:34:35Z) - On the Expressiveness and Generalization of Hypergraph Neural Networks [77.65788763444877]
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs).
Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes.
arXiv Detail & Related papers (2023-03-09T18:42:18Z) - A Theory of Link Prediction via Relational Weisfeiler-Leman on Knowledge Graphs [6.379544211152605]
Graph neural networks are prominent models for representation learning over graph-structured data.
Our goal is to provide a systematic understanding of the landscape of graph neural networks for knowledge graphs.
arXiv Detail & Related papers (2023-02-04T17:40:03Z) - MEGAN: Multi-Explanation Graph Attention Network [1.1470070927586016]
We propose a multi-explanation graph attention network (MEGAN).
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable and explanations can actively be trained in an explanation-supervised manner.
arXiv Detail & Related papers (2022-11-23T16:10:13Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, leading to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, revealing that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
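The unrolled proximal-gradient view can be sketched as follows; the polynomial mixture, step size, and sparsity weight are illustrative choices, and unlike a trained GDN the iteration parameters here are fixed by hand rather than learned.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Hypothetical mixture: the observed graph is a polynomial in a latent
# sparse graph S, here A_obs = h0*I + h1*S + h2*(S @ S).
S_true = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
S_true = S_true + S_true.T
h = [0.1, 1.0, 0.3]
A_obs = h[0] * np.eye(n) + h[1] * S_true + h[2] * (S_true @ S_true)

def soft_threshold(M, t):
    """Proximal operator of the L1 norm; promotes sparse edge estimates."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def unrolled_deconvolution(A, steps=50, step_size=0.01, lam=0.05):
    """Truncated proximal-gradient iterations on 0.5*||f(S) - A||^2 + lam*|S|_1.
    In a GDN these iterations become layers whose step sizes and thresholds
    are learned from data; here they are fixed."""
    S = np.zeros_like(A)
    for _ in range(steps):
        residual = h[0] * np.eye(n) + h[1] * S + h[2] * (S @ S) - A
        grad = h[1] * residual + h[2] * (residual @ S + S @ residual)
        S = soft_threshold(S - step_size * grad, step_size * lam)
    return S

S_hat = unrolled_deconvolution(A_obs)
```

Truncating the iterations and learning the per-layer parameters is what turns this fixed-point scheme into a trainable, inductive network.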
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Learning to Induce Causal Structure [29.810917060087117]
We propose a neural network architecture that learns the mapping from both observational and interventional data to graph structures via supervised training on synthetic graphs.
We show that the proposed model generalizes not only to new synthetic graphs but also to naturalistic graphs.
arXiv Detail & Related papers (2022-04-11T05:38:22Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
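The adaptive multi-hop aggregation can be sketched roughly as follows, with a fixed scoring vector standing in for DAGNN's trainable projection (the graph, sizes, and names are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 6, 3, 4   # nodes, feature dim, max propagation depth

# Toy symmetric graph with self-loops, row-normalized.
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T) + np.eye(n)
A_norm = A / A.sum(axis=1, keepdims=True)

X = rng.normal(size=(n, d))

# Multi-hop propagations: H_0 = X, H_k = A_norm @ H_{k-1}.
hops = [X]
for _ in range(K):
    hops.append(A_norm @ hops[-1])
H = np.stack(hops)                              # shape (K+1, n, d)

# Adaptive gating: score each hop per node, softmax over hops, weighted sum.
# In DAGNN the scores come from a trainable projection; a fixed random
# vector stands in for it here.
s = rng.normal(size=(d,))
scores = H @ s                                  # shape (K+1, n)
weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
out = (weights[..., None] * H).sum(axis=0)      # (n, d): per-node hop mixture
```

Because each node weights its own receptive-field depths, distant information can be used where helpful without over-smoothing nodes that need only local context.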
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.