Logic of Hypotheses: from Zero to Full Knowledge in Neurosymbolic Integration
- URL: http://arxiv.org/abs/2509.21663v1
- Date: Thu, 25 Sep 2025 22:31:43 GMT
- Title: Logic of Hypotheses: from Zero to Full Knowledge in Neurosymbolic Integration
- Authors: Davide Bizzaro, Alessandro Daniele
- Abstract summary: Neurosymbolic integration (NeSy) blends neural-network learning with symbolic reasoning. We introduce Logic of Hypotheses (LoH), a novel language that unifies data-driven rule learning with symbolic priors and expert knowledge.
- Score: 46.43084711486819
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurosymbolic integration (NeSy) blends neural-network learning with symbolic reasoning. The field can be split between methods injecting hand-crafted rules into neural models, and methods inducing symbolic rules from data. We introduce Logic of Hypotheses (LoH), a novel language that unifies these strands, enabling the flexible integration of data-driven rule learning with symbolic priors and expert knowledge. LoH extends propositional logic syntax with a choice operator, which has learnable parameters and selects a subformula from a pool of options. Using fuzzy logic, formulas in LoH can be directly compiled into a differentiable computational graph, so the optimal choices can be learned via backpropagation. This framework subsumes some existing NeSy models, while adding the possibility of arbitrary degrees of knowledge specification. Moreover, the use of Gödel fuzzy logic and the recently developed Gödel trick yields models that can be discretized to hard Boolean-valued functions without any loss in performance. We provide experimental analysis on such models, showing strong results on tabular data and on the Visual Tic-Tac-Toe NeSy task, while producing interpretable decision rules.
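As a concrete illustration, here is a minimal sketch of how such a choice operator could be compiled into a differentiable graph, assuming Gödel semantics (min for conjunction) and a softmax relaxation over the learnable selection. The class names and structure are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of a LoH-style choice operator (illustrative, not the authors'
# code): learnable logits softly select one subformula from a pool, so
# the selection can be trained end to end by backpropagation.
class Choice(nn.Module):
    def __init__(self, options):
        super().__init__()
        self.options = nn.ModuleList(options)
        self.logits = nn.Parameter(torch.zeros(len(options)))

    def forward(self, x):
        values = torch.stack([opt(x) for opt in self.options], dim=-1)
        weights = torch.softmax(self.logits, dim=-1)  # soft selection
        return (values * weights).sum(dim=-1)

class Var(nn.Module):
    """Fuzzy truth value of one propositional variable, read from x."""
    def __init__(self, index):
        super().__init__()
        self.index = index

    def forward(self, x):
        return x[..., self.index]

class And(nn.Module):
    """Gödel conjunction: the minimum of the operands' truth values."""
    def __init__(self, left, right):
        super().__init__()
        self.left, self.right = left, right

    def forward(self, x):
        return torch.minimum(self.left(x), self.right(x))

# Partial knowledge: the expert fixes "... AND x2" but lets the model
# learn whether the other conjunct is x0 or x1.
formula = And(Choice([Var(0), Var(1)]), Var(2))
truth = formula(torch.rand(8, 3))  # batch of 8 fuzzy assignments in [0, 1]
```

After training, taking the argmax of each choice's logits discretizes the formula back into a hard Boolean rule; per the abstract, combining Gödel semantics with the Gödel trick makes this discretization lossless.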
Related papers
- On Improving Neurosymbolic Learning by Exploiting the Representation Space [54.16389421332958]
We study the problem of learning neural classifiers in a neurosymbolic setting where the hidden gold labels of input instances must satisfy a logical formula.
One challenge is that the space of label combinations can grow exponentially, making learning difficult.
We propose a technique that prunes this space by exploiting the intuition that instances with similar latent representations are likely to share the same label.
arXiv Detail & Related papers (2026-02-08T13:56:47Z)
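A hypothetical sketch of that pruning intuition (the function, threshold, and data are illustrative assumptions, not the paper's method): candidate label combinations that assign different labels to nearly identical latent representations are discarded before reasoning over the formula.

```python
import torch

# Hypothetical pruning sketch: drop candidate label combinations that
# give different labels to instances whose latents are nearly identical.
def prune_label_combinations(latents, combinations, threshold=0.95):
    normed = torch.nn.functional.normalize(latents, dim=-1)
    sim = normed @ normed.T  # pairwise cosine similarities
    pairs = [(i, j) for i in range(len(latents))
             for j in range(i + 1, len(latents)) if sim[i, j] > threshold]
    return [combo for combo in combinations
            if all(combo[i] == combo[j] for i, j in pairs)]

# Three instances; candidates are label tuples satisfying some formula.
latents = torch.tensor([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
candidates = [(0, 0, 1), (0, 1, 1), (1, 1, 0)]
print(prune_label_combinations(latents, candidates))  # drops (0, 1, 1)
```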
- Noise to the Rescue: Escaping Local Minima in Neurosymbolic Local Search [50.24983453990065]
We show that applying backpropagation (BP) to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving.
We propose the Gödel Trick, which adds noise to the model's logits to escape local optima.
arXiv Detail & Related papers (2025-03-03T18:42:13Z)
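A hedged sketch of that idea (the formula, noise scale, and training loop are illustrative assumptions, not the paper's exact procedure): under Gödel semantics gradients flow only through the current argmin/argmax literal, so descent behaves like local search, and perturbing the logits with noise lets it escape local optima.

```python
import torch

# Illustrative Gödel Trick sketch (assumptions, not the paper's code).
logits = torch.zeros(3, requires_grad=True)  # one logit per Boolean variable
optimizer = torch.optim.SGD([logits], lr=1.0)

def formula(v):
    # (x0 OR x1) AND (NOT x0 OR x2): Gödel max/min, negation as 1 - v
    return torch.minimum(torch.maximum(v[0], v[1]),
                         torch.maximum(1 - v[0], v[2]))

for step in range(200):
    noisy = logits + 0.5 * torch.randn(3)     # noise to escape local optima
    loss = 1 - formula(torch.sigmoid(noisy))  # push the formula towards true
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

assignment = (logits > 0).float()  # discretize to a hard Boolean assignment
print(assignment, formula(assignment))  # expected to satisfy the formula
```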
- On Scaling Neurosymbolic Programming through Guided Logical Inference [1.124958340749622]
We propose a new approach centered around an exact algorithm, DPNL, that enables bypassing the computation of the logical provenance.
We show that this approach can be adapted for approximate reasoning with $\epsilon$ or $(\epsilon, \delta)$ guarantees, called ApproxDPNL.
arXiv Detail & Related papers (2025-01-30T08:49:25Z)
- Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation [0.0]
We propose Propper, which handles flawed and probabilistic background knowledge.
For relational patterns in noisy images, Propper can learn programs from as few as 8 examples.
It outperforms binary ILP and statistical models such as a Graph Neural Network.
arXiv Detail & Related papers (2024-08-21T06:38:49Z)
- Fuzzy Datalog$^\exists$ over Arbitrary t-Norms [5.464669506214195]
One of the main challenges in the area of Neuro-Symbolic AI is to perform logical reasoning in the presence of both neural and symbolic data.
This requires combining heterogeneous data sources such as knowledge graphs, neural model predictions, structured databases, crowd-sourced data, and many more.
We generalise the standard rule-based language Datalog with existential rules to the fuzzy setting, by allowing for arbitrary t-norms in the place of classical conjunctions in rule bodies.
The resulting formalism allows us to perform reasoning about associated data with degrees of uncertainty while preserving computational complexity results and the applicability of reasoning techniques established for the standard setting.
arXiv Detail & Related papers (2024-03-05T12:51:40Z)
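A small illustrative example (not from the paper) of what "arbitrary t-norms in place of classical conjunction" means in practice: the same fuzzy rule body evaluated under three standard t-norms yields different truth degrees.

```python
# Illustrative t-norm sketch: the rule body edge(x, y) AND label(y),
# with truth degrees coming, e.g., from neural model predictions.
def goedel(a, b):        # Gödel t-norm: minimum
    return min(a, b)

def product(a, b):       # product t-norm
    return a * b

def lukasiewicz(a, b):   # Łukasiewicz t-norm
    return max(0.0, a + b - 1)

edge, label = 0.9, 0.7
for tnorm in (goedel, product, lukasiewicz):
    print(tnorm.__name__, tnorm(edge, label))
# goedel 0.7 | product 0.63 | lukasiewicz 0.6 (up to float rounding)
```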
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neuro-Symbolic Recommendation Model based on Logic Query [16.809190067920387]
We propose a neuro-symbolic recommendation model, which transforms a user's historical interactions into a logic expression.
The logic expressions are then computed based on the modular logic operations of the neural network.
Experiments on three well-known datasets verified that our method performs better than state-of-the-art shallow, deep, session, and reasoning models.
arXiv Detail & Related papers (2023-09-14T10:54:48Z)
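A hedged sketch of what "modular logic operations of the neural network" could look like (architecture and dimensions are assumptions, not the paper's): each connective is a small network over event embeddings, so a user's history composes into a logic expression scored against candidate items.

```python
import torch
import torch.nn as nn

DIM = 32  # embedding width (illustrative assumption)

class NeuralNot(nn.Module):
    """Learned NOT: maps an event embedding to its negation embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(),
                                 nn.Linear(DIM, DIM))
    def forward(self, e):
        return self.net(e)

class NeuralAnd(nn.Module):
    """Learned AND: merges two embeddings into one conjunction embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.ReLU(),
                                 nn.Linear(DIM, DIM))
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

# "liked item1 AND NOT liked item2", scored against a candidate item.
item1, item2, candidate = (torch.randn(DIM) for _ in range(3))
expr = NeuralAnd()(item1, NeuralNot()(item2))
score = torch.cosine_similarity(expr, candidate, dim=0)
```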
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)