Neurosymbolic Reasoning Shortcuts under the Independence Assumption
- URL: http://arxiv.org/abs/2507.11357v1
- Date: Tue, 15 Jul 2025 14:27:05 GMT
- Title: Neurosymbolic Reasoning Shortcuts under the Independence Assumption
- Authors: Emile van Krieken, Pasquale Minervini, Edoardo Ponti, Antonio Vergari
- Abstract summary: The ubiquitous independence assumption among symbolic concepts in neurosymbolic (NeSy) predictors is a convenient simplification. We show that assuming independence entails that a model can never represent uncertainty over certain concept combinations.
- Score: 14.424743331071241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ubiquitous independence assumption among symbolic concepts in neurosymbolic (NeSy) predictors is a convenient simplification: NeSy predictors use it to speed up probabilistic reasoning. Recent works like van Krieken et al. (2024) and Marconato et al. (2024) argued that the independence assumption can hinder learning of NeSy predictors and, more crucially, prevent them from correctly modelling uncertainty. There is, however, scepticism in the NeSy community around the scenarios in which the independence assumption actually limits NeSy systems (Faronius and Dos Martires, 2025). In this work, we settle this question by formally showing that assuming independence among symbolic concepts entails that a model can never represent uncertainty over certain concept combinations. Thus, the model fails to be aware of reasoning shortcuts, i.e., the pathological behaviour of NeSy predictors that predict correct downstream tasks but for the wrong reasons.
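To make the core claim concrete, consider the standard two-concept setup: the evidence says exactly one of two binary concepts is true, with no information about which. The sketch below (illustrative, not taken from the paper) brute-forces all product distributions and shows that none can place mass only on the (0,1) and (1,0) combinations:

```python
import itertools

# Target joint over two binary concepts (c1, c2): all probability mass
# on the "exactly one concept is true" combinations.
target = {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.0}

# An independence-assuming model can only represent product
# distributions p(c1, c2) = p(c1) * p(c2).
def product_joint(p1, p2):
    return {(a, b): (p1 if a else 1 - p1) * (p2 if b else 1 - p2)
            for a, b in itertools.product([0, 1], repeat=2)}

# Grid-search the whole (p1, p2) square for the closest product joint.
grid = [i / 200 for i in range(201)]
best = min(
    max(abs(product_joint(p1, p2)[k] - v) for k, v in target.items())
    for p1 in grid for p2 in grid
)
print(f"best max-error of any independent model: {best:.3f}")  # 0.250
# Matching the marginals (p1 = p2 = 0.5) still forces
# p(0,0) = p(1,1) = 0.25: the model cannot put zero mass on those
# worlds while staying uncertain about *which* concept is true.
```

Any independent model must either commit to one concept configuration or spread mass onto combinations the evidence rules out, which is exactly the blindness to reasoning shortcuts the paper formalizes.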
Related papers
- Neurosymbolic Diffusion Models [14.424743331071241]
Neurosymbolic (NeSy) predictors combine neural perception with symbolic reasoning to solve tasks like visual reasoning.
Standard NeSy predictors assume conditional independence between the symbols they extract, thus limiting their ability to model interactions and uncertainty.
We introduce neurosymbolic diffusion models (NeSyDMs), a new class of NeSy predictors that use discrete diffusion to model dependencies between symbols.
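The toy sketch below illustrates the mechanism in the simplest terms (the hand-coded `denoiser` table is hypothetical; the paper uses a learned network): unmasking symbols one at a time, each conditioned on the symbols already revealed, lets the sampler realize joints, such as perfect anti-correlation, that a product of marginals cannot express.

```python
import random

# Toy "denoiser" for a two-symbol problem: given the currently revealed
# symbols, return the probability that each still-masked symbol is 1.
# Hand-coded here to encode the joint p(0,1) = p(1,0) = 0.5.
def denoiser(revealed):
    if not revealed:                 # nothing revealed yet:
        return {0: 0.5, 1: 0.5}      # each symbol is 1 w.p. 0.5
    (i, v), = revealed.items()       # one symbol revealed:
    return {1 - i: 1.0 - v}          # the other must be its flip

def sample_symbols(n=2):
    revealed = {}
    order = list(range(n))
    random.shuffle(order)            # unmask in a random order
    for i in order:
        p1 = denoiser(revealed)[i]
        revealed[i] = int(random.random() < p1)
    return tuple(revealed[i] for i in range(n))

# The sampler only ever produces (0, 1) or (1, 0): a perfectly
# anti-correlated joint no independent model can represent.
print({sample_symbols() for _ in range(1000)})  # {(0, 1), (1, 0)}
```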
arXiv Detail & Related papers (2025-05-19T14:07:47Z)
- On the Independence Assumption in Neurosymbolic Learning [14.447011414006719]
State-of-the-art neurosymbolic learning systems use probabilistic reasoning to guide neural networks towards predictions that conform to logical constraints over symbols.
Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning.
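To see why the assumption simplifies reasoning: under independence, the probability of a logical constraint often has a closed form, avoiding enumeration of all 2^n concept worlds. A minimal sketch with illustrative marginals:

```python
import itertools
import math

# Independent concept marginals p(c_i = 1 | x), e.g. from a neural net.
p = [0.9, 0.2, 0.5, 0.7]

# Probability that "at least one concept is true", under independence:
# closed form, O(n).
p_fast = 1.0 - math.prod(1.0 - pi for pi in p)

# The same quantity by brute-force weighted model counting over all
# 2^n worlds -- what reasoning costs without a factorized distribution.
p_slow = sum(
    math.prod(pi if ci else 1.0 - pi for pi, ci in zip(p, world))
    for world in itertools.product([0, 1], repeat=len(p))
    if any(world)  # keep only worlds satisfying the constraint
)

assert abs(p_fast - p_slow) < 1e-12
print(p_fast)  # 0.988
```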
arXiv Detail & Related papers (2024-04-12T13:09:48Z)
- BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts [21.743306538494043]
Reasoning shortcuts can affect Neuro-Symbolic (NeSy) predictors.
They arise when a model learns concepts that are consistent with the symbolic knowledge but carry unintended semantics.
We propose to ensure NeSy models are aware of the semantic ambiguity of the concepts they learn.
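BEARS builds on model ensembles; the generic sketch below (toy numbers, not the paper's implementation) shows how ensemble disagreement exposes an ambiguous concept:

```python
import numpy as np

def entropy(q):
    q = np.clip(q, 1e-12, 1.0)
    return -(q * np.log(q)).sum(-1)

# Concept posteriors from an ensemble of 3 extractors for one input,
# each a distribution over 2 possible concept values (toy numbers).
members = np.array([[0.95, 0.05],
                    [0.10, 0.90],
                    [0.90, 0.10]])

mean = members.mean(0)
total = entropy(mean)                # predictive entropy
aleatoric = entropy(members).mean()  # mean per-member entropy
disagreement = total - aleatoric     # mutual information: high when
                                     # members confidently disagree,
                                     # i.e. the concept is ambiguous
print(f"disagreement = {disagreement:.3f}")
```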
arXiv Detail & Related papers (2024-02-19T15:54:36Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
The proposed BNSP-SFM achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Probabilistic Concept Bottleneck Models [26.789507935869107]
Interpretable models are designed to make decisions in a human-interpretable manner.
In this study, we address the ambiguity in concept prediction that can harm reliability.
We propose Probabilistic Concept Bottleneck Models (ProbCBM).
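A minimal sketch of the general idea, predicting a distribution rather than a point per concept and propagating Monte Carlo samples through the bottleneck; the Gaussian-logit head and all numbers are illustrative, not ProbCBM's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Instead of a point estimate per concept, predict a Gaussian over each
# concept logit; (mu, sigma) would come from the network's heads.
mu, sigma = np.array([2.0, 0.1]), np.array([0.3, 2.0])

# Monte Carlo through the bottleneck: sample logits, squash, average.
logits = rng.normal(mu, sigma, size=(1000, 2))
probs = 1.0 / (1.0 + np.exp(-logits))

print("concept means:", probs.mean(0).round(2))  # ~[0.88, 0.51]
print("concept stds: ", probs.std(0).round(2))   # high std = ambiguous
# Concept 1 (sigma = 2.0) yields a wide predictive spread: the model
# explicitly represents that this concept's value is uncertain.
```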
arXiv Detail & Related papers (2023-06-02T14:38:58Z)
- Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts [24.390922632057627]
Neuro-Symbolic (NeSy) predictive models hold the promise of improved compliance with given constraints.
They make it possible to infer labels that are consistent with some prior knowledge by reasoning over high-level concepts extracted from sub-symbolic inputs.
It was recently shown that NeSy predictors are affected by reasoning shortcuts: they can attain high accuracy but by leveraging concepts with unintended semantics, thus coming short of their promised advantages.
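A minimal worked example of a reasoning shortcut, using the standard XOR illustration (a common teaching example, not tied to this paper's specific benchmarks):

```python
# Knowledge: the downstream label is the XOR of two binary concepts.
def label(c1, c2):
    return c1 ^ c2

def intended(c): return c       # reads concepts correctly
def shortcut(c): return 1 - c   # systematically flips every concept

for c1, c2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert label(intended(c1), intended(c2)) == \
           label(shortcut(c1), shortcut(c2))
# Both perceivers achieve 100% label accuracy, but the shortcut assigns
# every concept the *wrong* value: downstream supervision alone cannot
# tell them apart.
print("shortcut matches the intended labels on all inputs")
```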
arXiv Detail & Related papers (2023-05-31T15:35:48Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
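For reference, the closed-form uncertainty decomposition from Deep Evidential Regression (Amini et al., 2020) that this critique examines; the head outputs below are hypothetical toy values:

```python
def evidential_uncertainties(gamma, nu, alpha, beta):
    """Moments of the Normal-Inverse-Gamma posterior used in Deep
    Evidential Regression (Amini et al., 2020). Requires alpha > 1."""
    prediction = gamma                     # E[mu]
    aleatoric = beta / (alpha - 1)         # E[sigma^2], data noise
    epistemic = beta / (nu * (alpha - 1))  # Var[mu], model uncertainty
    return prediction, aleatoric, epistemic

# Toy head outputs, as a trained network would emit them:
print(evidential_uncertainties(gamma=1.5, nu=0.1, alpha=2.0, beta=0.4))
# -> (1.5, 0.4, 4.0): small nu ("little evidence") inflates the
#    epistemic term, the quantity whose exactness is under scrutiny.
```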
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty using Bayesian approximation, capturing stochasticity that deterministic approaches fail to model.
The effect of dropout weights and of long-term prediction on future-state uncertainty is studied.
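A minimal sketch of the Monte Carlo dropout recipe behind such Bayesian approximations (toy one-layer regressor; the weights and input are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_with_dropout(x, w, drop=0.5):
    """One stochastic pass with dropout kept active at test time --
    the core of the Bayesian approximation."""
    mask = rng.random(w.shape) >= drop
    return x @ (w * mask) / (1.0 - drop)  # inverted-dropout scaling

x = np.array([0.4, -1.2, 0.7])  # one input (toy features)
w = np.array([0.9, 0.3, -0.5])  # trained weights (toy values)

# Many stochastic passes approximate the predictive distribution:
samples = np.array([forward_with_dropout(x, w) for _ in range(1000)])
print(f"mean = {samples.mean():.3f}, std = {samples.std():.3f}")
# The std is the uncertainty estimate that a single deterministic
# forward pass cannot provide.
```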
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
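For intuition, a minimal sketch of the real-valued logic connectives that LNN neurons generalize; this uses unweighted Lukasiewicz operators, whereas LNN itself learns weighted variants with thresholds:

```python
# Unweighted Lukasiewicz connectives: truth values live in [0, 1].
def AND(x, y): return max(0.0, x + y - 1.0)
def OR(x, y):  return min(1.0, x + y)
def NOT(x):    return 1.0 - x

# Each "neuron" corresponds to a formula component:
raining, sprinkler = 0.8, 0.3
wet = OR(raining, sprinkler)   # wet <- raining OR sprinkler
print(f"wet = {wet:.1f}")      # 1.0 (clipped at the top of [0, 1])

# At the Boolean corners, the connectives recover classical logic:
assert AND(1.0, 1.0) == 1.0 and AND(1.0, 0.0) == 0.0
assert NOT(0.0) == 1.0
```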