DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs
- URL: http://arxiv.org/abs/2511.08581v1
- Date: Wed, 12 Nov 2025 02:05:47 GMT
- Title: DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs
- Authors: Ying Jiao, Rodrigo Castellano Ontiveros, Luc De Raedt, Marco Gori, Francesco Giannini, Michelangelo Diligenti, Giuseppe Marra,
- Abstract summary: This paper introduces DeepProofLog (DPrL), a novel NeSy system based on stochastic logic programs. DPrL parameterizes all derivation steps with neural networks, allowing efficient neural guidance over the proving system. Our experiments on standard NeSy benchmarks and knowledge graph reasoning tasks demonstrate that DPrL outperforms existing state-of-the-art NeSy systems.
- Score: 30.18197334181211
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurosymbolic (NeSy) AI aims to combine the strengths of neural architectures and symbolic reasoning to improve the accuracy, interpretability, and generalization capability of AI models. While logic inference on top of subsymbolic modules has been shown to effectively guarantee these properties, this often comes at the cost of reduced scalability, which can severely limit the usability of NeSy models. This paper introduces DeepProofLog (DPrL), a novel NeSy system based on stochastic logic programs, which addresses the scalability limitations of previous methods. DPrL parameterizes all derivation steps with neural networks, allowing efficient neural guidance over the proving system. Additionally, we establish a formal mapping between the resolution process of our deep stochastic logic programs and Markov Decision Processes, enabling the application of dynamic programming and reinforcement learning techniques for efficient inference and learning. This theoretical connection improves scalability for complex proof spaces and large knowledge bases. Our experiments on standard NeSy benchmarks and knowledge graph reasoning tasks demonstrate that DPrL outperforms existing state-of-the-art NeSy systems, advancing scalability to larger and more complex settings than previously possible.
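The abstract's central idea, viewing the resolution process of a stochastic logic program as a Markov Decision Process whose actions are derivation steps, can be illustrated with a toy sketch. The rule set, the uniform stand-in for the neural scorer, and all names below are illustrative assumptions, not the paper's actual implementation: states are lists of pending goals, actions are applicable rules, a learned scorer would replace `uniform`, and the probability of a proof is the product of the step probabilities along its derivation.

```python
import math

# Tiny propositional KB: (head, body) pairs; facts have empty bodies.
# Hypothetical example program, not taken from the paper.
RULES = [
    ("path_ac", ["edge_ab", "path_bc"]),
    ("path_ac", ["edge_ad"]),          # competing rule that leads to a dead end
    ("path_bc", ["edge_bc"]),
    ("edge_ab", []),
    ("edge_bc", []),
]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def prove(goals, step_score, prob=1.0):
    """Depth-first resolution. At each state (list of pending goals),
    applicable rules are scored and normalized into a policy, mirroring
    an MDP whose actions are derivation steps. Returns the total
    probability mass of derivations reaching the empty goal list."""
    if not goals:
        return prob                    # empty goal list: successful proof
    head, rest = goals[0], goals[1:]
    applicable = [(h, b) for (h, b) in RULES if h == head]
    if not applicable:
        return 0.0                     # dead end: failed derivation
    policy = softmax([step_score(goals, r) for r in applicable])
    return sum(
        prove(body + rest, step_score, prob * p)
        for (_, body), p in zip(applicable, policy)
    )

# Stand-in for the neural scorer: uniform scores over applicable rules.
uniform = lambda state, rule: 0.0

# One of the two rules for path_ac succeeds, the other dead-ends,
# so under the uniform policy the success probability is 0.5.
success_prob = prove(["path_ac"], uniform)
```

A learned scorer that assigns a higher score to the first `path_ac` rule would shift probability mass toward the successful derivation, which is the sense in which neural guidance over the proving system can make inference efficient.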
Related papers
- WARP Logic Neural Networks [0.0]
We introduce WAlsh Relaxation for Probabilistic (WARP) logic neural networks. WARP is a gradient-based framework that efficiently learns combinations of hardware-native logic blocks. We show that WARP yields the most parameter-efficient representation for exactly learning Boolean functions.
arXiv Detail & Related papers (2026-02-03T13:46:51Z) - NeSyPr: Neurosymbolic Proceduralization For Efficient Embodied Reasoning [21.685443540926652]
NeSyPr is a novel embodied reasoning framework that compiles knowledge via neurosymbolic proceduralization. It supports efficient test-time inference without relying on external symbolic guidance. We evaluate NeSyPr on the embodied benchmarks PDDLGym, VirtualHome, and ALFWorld.
arXiv Detail & Related papers (2025-10-22T09:57:02Z) - On Scaling Neurosymbolic Programming through Guided Logical Inference [1.124958340749622]
We propose a new approach centered around an exact algorithm, DPNL, that bypasses the computation of the logical provenance. We show that this approach can be adapted for approximate reasoning with $\epsilon$ or $(\epsilon, \delta)$ guarantees, called ApproxDPNL.
arXiv Detail & Related papers (2025-01-30T08:49:25Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - LightCode: Light Analytical and Neural Codes for Channels with Feedback [10.619569069690185]
We focus on designing low-complexity coding schemes that are interpretable and more suitable for communication systems.
First, we demonstrate that PowerBlast, an analytical coding scheme inspired by the Schalkwijk-Kailath (SK) and Gallager-Nakiboglu (GN) schemes, achieves notable reliability improvements over both SK and GN schemes.
Next, to enhance reliability in low-SNR regions, we propose LightCode, a lightweight neural code that achieves state-of-the-art reliability while using a fraction of the memory and compute compared to existing deep learning-based codes.
arXiv Detail & Related papers (2024-03-16T01:04:34Z) - The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
New architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z) - A Novel Neural-symbolic System under Statistical Relational Learning [47.30190559449236]
We propose a neural-symbolic framework based on statistical relational learning, referred to as NSF-SRL. Results of symbolic reasoning are utilized to refine and correct the predictions made by deep learning models, while deep learning models enhance the efficiency of the symbolic reasoning process. We believe that this approach sets a new standard for neural-symbolic systems and will drive future research in the field of general artificial intelligence.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models [89.98762327725112]
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems.
For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder.
Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs.
arXiv Detail & Related papers (2022-05-04T01:27:36Z) - Latent Space Data Assimilation by using Deep Learning [0.0]
Performing Data Assimilation (DA) at a low cost is of prime concern in Earth system modeling.
We incorporate Deep Learning (DL) methods into a DA framework.
We exploit the latent structure provided by autoencoders (AEs) to design an Ensemble Transform Kalman Filter with model error (ETKF-Q) in the latent space.
arXiv Detail & Related papers (2021-04-01T12:25:55Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.