Neurosymbolic Diffusion Models
- URL: http://arxiv.org/abs/2505.13138v1
- Date: Mon, 19 May 2025 14:07:47 GMT
- Title: Neurosymbolic Diffusion Models
- Authors: Emile van Krieken, Pasquale Minervini, Edoardo Ponti, Antonio Vergari
- Abstract summary: Neurosymbolic (NeSy) predictors combine neural perception with symbolic reasoning to solve tasks like visual reasoning. Standard NeSy predictors assume conditional independence between the symbols they extract, thus limiting their ability to model interactions and uncertainty. We introduce neurosymbolic diffusion models (NeSyDMs), a new class of NeSy predictors that use discrete diffusion to model dependencies between symbols.
- Score: 14.424743331071241
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurosymbolic (NeSy) predictors combine neural perception with symbolic reasoning to solve tasks like visual reasoning. However, standard NeSy predictors assume conditional independence between the symbols they extract, thus limiting their ability to model interactions and uncertainty - often leading to overconfident predictions and poor out-of-distribution generalisation. To overcome the limitations of the independence assumption, we introduce neurosymbolic diffusion models (NeSyDMs), a new class of NeSy predictors that use discrete diffusion to model dependencies between symbols. Our approach reuses the independence assumption from NeSy predictors at each step of the diffusion process, enabling scalable learning while capturing symbol dependencies and uncertainty quantification. Across both synthetic and real-world benchmarks - including high-dimensional visual path planning and rule-based autonomous driving - NeSyDMs achieve state-of-the-art accuracy among NeSy predictors and demonstrate strong calibration.
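The mechanism the abstract describes, per-step conditional independence inside a discrete diffusion over symbols with symbolic reasoning on top, can be illustrated in a few lines. The following is a minimal sketch assuming a masked discrete-diffusion schedule; the names `denoiser` and `symbolic_program` and the unmasking schedule are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of NeSyDM-style inference, assuming a masked discrete-diffusion
# process over symbols; names and schedule are illustrative, not the paper's API.
K, MASK = 10, 10                     # K symbol values plus a special mask token
rng = np.random.default_rng(0)

def denoiser(x, symbols):
    """Stand-in for the neural network. The key structural point from the
    abstract: within one reverse step, symbols are predicted independently
    given the input and the current partially-masked state; dependencies
    build up across steps."""
    logits = rng.normal(size=(len(symbols), K))   # a real model would use x here
    return np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

def symbolic_program(symbols):
    """Stand-in symbolic reasoner, e.g. summing extracted digits."""
    return int(symbols.sum())

def reverse_diffusion(x, n_symbols, n_steps=4):
    symbols = np.full(n_symbols, MASK)
    for step in range(n_steps):
        masked = np.flatnonzero(symbols == MASK)
        if masked.size == 0:
            break
        probs = denoiser(x, symbols)              # independent *per step* only
        k = max(1, masked.size // (n_steps - step))
        for i in rng.choice(masked, size=k, replace=False):
            symbols[i] = rng.choice(K, p=probs[i])
    return symbolic_program(symbols)

print(reverse_diffusion(x=None, n_symbols=4))
```

The point of the sketch is the factorisation: each reverse step predicts symbols independently, yet composing steps lets the model place mass on correlated symbol combinations that a single independent prediction cannot represent.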
Related papers
- Neurosymbolic Reasoning Shortcuts under the Independence Assumption [14.424743331071241]
The ubiquitous independence assumption among symbolic concepts in neurosymbolic (NeSy) predictors is a convenient simplification. We show that assuming independence entails that a model can never represent uncertainty over certain concept combinations.
arXiv Detail & Related papers (2025-07-15T14:27:05Z)
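The entry above claims that independence precludes representing uncertainty over certain concept combinations; a standard two-concept worked example (not specific to that paper) makes this concrete:

```latex
\text{Target: full uncertainty between two correlated combinations, }
p^*(c_1{=}0, c_2{=}0) = p^*(c_1{=}1, c_2{=}1) = \tfrac{1}{2}.
\text{Any factorised model } p(c_1)\,p(c_2) \text{ matching the marginals has }
p(c_1{=}1) = p(c_2{=}1) = \tfrac{1}{2}
\;\Rightarrow\; p(c_1{=}1, c_2{=}0) = \tfrac{1}{4} \neq 0.
```

No product of marginals recovers $p^*$: matching the marginals forces probability $\tfrac{1}{4}$ onto each of the two combinations that should have probability zero.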
- Neurosymbolic AI for Travel Demand Prediction: Integrating Decision Tree Rules into Neural Networks [21.445133878049333]
This study introduces a Neurosymbolic Artificial Intelligence (Neurosymbolic AI) framework that integrates decision tree (DT)-based symbolic rules with neural networks (NNs) to predict travel demand.
arXiv Detail & Related papers (2025-02-02T05:10:31Z)
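The travel-demand entry above combines decision-tree rules with a neural network. The paper's exact integration is not reproduced here; one common, purely illustrative pattern (toy data, assumed scikit-learn components) feeds the tree's leaf-membership indicators to the network alongside the raw features:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

# Illustrative pattern only (the paper's integration may differ): fit a shallow
# tree, then give the network the tree's leaf-membership indicators alongside
# the raw features, so the NN can refine the symbolic rule partitions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                # toy trip features
y = X[:, 0] + 2.0 * (X[:, 1] > 0) + rng.normal(0, 0.1, 500)  # toy demand signal

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)                                     # rule region per row
rule_feats = (leaf_ids[:, None] == np.unique(leaf_ids)).astype(float)

X_aug = np.hstack([X, rule_feats])                           # features + rules
nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                  random_state=0).fit(X_aug, y)
print("in-sample R^2 with rule features:", round(nn.score(X_aug, y), 3))
```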
- Relational Neurosymbolic Markov Models [13.22004615196798]
Sequential problems are ubiquitous in AI, such as in reinforcement learning or natural language processing. Neurosymbolic AI (NeSy) provides a sound formalism to enforce constraints in deep probabilistic models, but scales exponentially on sequential problems. We propose a strategy for inference and learning that scales on sequential settings and combines approximate Bayesian inference, automated reasoning, and gradient estimation.
arXiv Detail & Related papers (2024-12-17T15:41:51Z)
- On the Independence Assumption in Neurosymbolic Learning [14.447011414006719]
State-of-the-art neurosymbolic learning systems use probabilistic reasoning to guide neural networks towards predictions that conform to logical constraints over symbols.
Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning.
arXiv Detail & Related papers (2024-04-12T13:09:48Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that the proposed BNSP-SFM model achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility of mechanistic models and the data-driven expressibility of AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
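The Nadaraya-Watson estimator behind the NUQ entry above is standard; a minimal version for class probabilities looks like the sketch below. The Gaussian kernel, bandwidth, and `1 - max` uncertainty score are illustrative choices, not NUQ's exact construction.

```python
import numpy as np

def nadaraya_watson_probs(x, X_train, y_train, n_classes, h=1.0):
    """Kernel estimate of p(y | x): weight each training label by a Gaussian
    kernel on its distance to x, then normalise over classes."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))                 # kernel weights
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

# Toy usage: a low maximum probability flags uncertain inputs.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
for x in (np.array([-2.0, 0.0]), np.array([0.0, 0.0])):  # confident vs. boundary
    p = nadaraya_watson_probs(x, X_train, y_train, n_classes=2)
    print(x, p.round(3), "uncertainty:", round(1 - p.max(), 3))
```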
- Modeling Implicit Bias with Fuzzy Cognitive Maps [0.0]
This paper presents a Fuzzy Cognitive Map model to quantify implicit bias in structured datasets.
We introduce a new reasoning mechanism equipped with a normalization-like transfer function that prevents neurons from saturating.
arXiv Detail & Related papers (2021-12-23T17:04:12Z)
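A Fuzzy Cognitive Map iterates concept activations through a signed weighted graph, `a(t+1) = f(W a(t))`; with the classic sigmoid transfer function, repeated updates drive activations toward 0/1, which is the saturation problem the entry above targets. The `step_normalized` variant below is an illustrative stand-in for a normalization-like transfer function, not the paper's exact mechanism, and the weights are hypothetical.

```python
import numpy as np

# Classic FCM update vs. a normalization-like alternative (illustrative only).
W = np.array([[ 0.0, 0.6, -0.4],
              [ 0.3, 0.0,  0.5],
              [-0.2, 0.7,  0.0]])   # hypothetical concept-to-concept weights

def step_sigmoid(a, lam=5.0):
    """Standard FCM step: squashes raw activations, saturating toward 0/1."""
    return 1.0 / (1.0 + np.exp(-lam * (W @ a)))

def step_normalized(a):
    """Stand-in 'normalization-like' step: rescales instead of squashing,
    keeping activations in [-1, 1] without pinning them to the extremes."""
    raw = W @ a
    return raw / max(1.0, np.abs(raw).max())

a = np.array([0.5, 0.2, 0.8])
for _ in range(10):
    a = step_normalized(a)
print("activations after 10 updates:", a.round(3))
```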
- Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectory in crowds.
We learn interpretable rule-based intents and then utilise the expressibility of neural networks to model the scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z)