Right for the Right Reasons: Avoiding Reasoning Shortcuts via Prototypical Neurosymbolic AI
- URL: http://arxiv.org/abs/2510.25497v1
- Date: Wed, 29 Oct 2025 13:21:28 GMT
- Title: Right for the Right Reasons: Avoiding Reasoning Shortcuts via Prototypical Neurosymbolic AI
- Authors: Luca Andolfi, Eleonora Giunchiglia
- Abstract summary: Neurosymbolic AI is growing in popularity thanks to its ability to combine neural perception and symbolic reasoning. In this paper, we address reasoning shortcuts at their root cause and introduce prototypical neurosymbolic architectures. Our findings pave the way to prototype grounding as an effective, annotation-efficient strategy for safe and reliable neurosymbolic learning.
- Score: 6.518655316889539
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurosymbolic AI is growing in popularity thanks to its ability to combine neural perception and symbolic reasoning in end-to-end trainable models. However, recent findings reveal that these models are prone to shortcut reasoning, i.e., to learning unintended concepts (or neural predicates) which exploit spurious correlations to satisfy the symbolic constraints. In this paper, we address reasoning shortcuts at their root cause and introduce prototypical neurosymbolic architectures. These models are able to satisfy the symbolic constraints (be right) because they have learnt the correct basic concepts (for the right reasons) and not because of spurious correlations, even in extremely low data regimes. Leveraging the theory of prototypical learning, we demonstrate that we can effectively avoid reasoning shortcuts by training the models to satisfy the background knowledge while taking into account the similarity of the input with respect to the handful of labelled datapoints. We extensively validate our approach on the recently proposed rsbench benchmark suite in a variety of settings and tasks with very scarce supervision: we show significant improvements in learning the right concepts both in synthetic tasks (MNIST-EvenOdd and Kand-Logic) and real-world, high-stakes ones (BDD-OIA). Our findings pave the way to prototype grounding as an effective, annotation-efficient strategy for safe and reliable neurosymbolic learning.
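The abstract's core mechanism, classifying concepts by similarity to prototypes computed from a handful of labelled datapoints, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the use of mean embeddings as prototypes, and the squared-Euclidean softmax are standard prototypical-learning choices assumed here for illustration:

```python
import numpy as np

def compute_prototypes(embeddings, labels):
    """One prototype per concept class: the mean embedding of its few labelled examples."""
    classes = np.unique(labels)
    return {c: embeddings[labels == c].mean(axis=0) for c in classes}

def prototype_probs(x, prototypes, temperature=1.0):
    """Concept probabilities via a softmax over negative squared distances to each prototype."""
    classes = sorted(prototypes)
    dists = np.array([np.sum((x - prototypes[c]) ** 2) for c in classes])
    logits = -dists / temperature
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return classes, exp / exp.sum()

# Toy usage: two concept classes, two labelled points each.
embs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.2]])
labs = np.array([0, 0, 1, 1])
protos = compute_prototypes(embs, labs)
classes, probs = prototype_probs(np.array([0.2, 0.1]), protos)
```

In a full neurosymbolic pipeline these distance-based concept probabilities would then be fed to the symbolic reasoner, so that constraint satisfaction is grounded in similarity to the labelled examples rather than in whatever spurious correlation happens to satisfy the constraints.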
Related papers
- Symbol Grounding in Neuro-Symbolic AI: A Gentle Introduction to Reasoning Shortcuts [30.036038864025457]
Neuro-symbolic (NeSy) AI aims to develop deep neural networks whose predictions comply with encoded prior knowledge. NeSy models can be affected by Reasoning Shortcuts (RSs). RSs can compromise the interpretability of the model's explanations, its performance in out-of-distribution scenarios, and therefore its reliability. This overview addresses this issue by providing a gentle introduction to RSs, discussing their causes and consequences in intuitive terms.
arXiv Detail & Related papers (2025-10-16T10:28:34Z) - Advanced Weakly-Supervised Formula Exploration for Neuro-Symbolic Mathematical Reasoning [18.937801725778538]
We propose an advanced practice for neuro-symbolic reasoning systems to explore intermediate labels with weak supervision from problem inputs and final outputs. Our experiments on the Mathematics dataset illustrate the effectiveness of our proposals from multiple aspects.
arXiv Detail & Related papers (2025-02-02T02:34:36Z) - Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z) - The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z) - A Novel Neural-symbolic System under Statistical Relational Learning [47.30190559449236]
We propose a neural-symbolic framework based on statistical relational learning, referred to as NSF-SRL. Results of symbolic reasoning are utilized to refine and correct the predictions made by deep learning models, while deep learning models enhance the efficiency of the symbolic reasoning process. We believe that this approach sets a new standard for neural-symbolic systems and will drive future research in the field of general artificial intelligence.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts [24.390922632057627]
Neuro-Symbolic (NeSy) predictive models hold the promise of improved compliance with given constraints.
They make it possible to infer labels that are consistent with prior knowledge by reasoning over high-level concepts extracted from sub-symbolic inputs.
It was recently shown that NeSy predictors are affected by reasoning shortcuts: they can attain high accuracy by leveraging concepts with unintended semantics, thus falling short of their promised advantages.
arXiv Detail & Related papers (2023-05-31T15:35:48Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Abductive Knowledge Induction From Raw Data [12.868722327487752]
We present Abductive Meta-Interpretive Learning ($Meta_Abd$) that unites abduction and induction to learn neural networks and induce logic theories jointly from raw data.
Experimental results demonstrate that $Meta_Abd$ outperforms the compared systems in both predictive accuracy and data efficiency.
arXiv Detail & Related papers (2020-10-07T16:33:28Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.