Implementing Tensor Logic: Unifying Datalog and Neural Reasoning via Tensor Contraction
- URL: http://arxiv.org/abs/2601.17188v1
- Date: Fri, 23 Jan 2026 21:38:19 GMT
- Title: Implementing Tensor Logic: Unifying Datalog and Neural Reasoning via Tensor Contraction
- Authors: Swapn Shah, Wlodek Zadrozny
- Abstract summary: Tensor Logic, proposed by Domingos, suggests that logical rules and Einstein summation are mathematically equivalent. This paper provides empirical validation of this framework through three experiments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The unification of symbolic reasoning and neural networks remains a central challenge in artificial intelligence. Symbolic systems offer reliability and interpretability but lack scalability, while neural networks provide learning capabilities but sacrifice transparency. Tensor Logic, proposed by Domingos, suggests that logical rules and Einstein summation are mathematically equivalent, offering a principled path toward unification. This paper provides empirical validation of this framework through three experiments. First, we demonstrate the equivalence between recursive Datalog rules and iterative tensor contractions by computing the transitive closure of a biblical genealogy graph containing 1,972 individuals and 1,727 parent-child relationships, converging in 74 iterations to discover 33,945 ancestor relationships. Second, we implement reasoning in embedding space by training a neural network with learnable transformation matrices, demonstrating successful zero-shot compositional inference on held-out queries. Third, we validate the Tensor Logic superposition construction on FB15k-237, a large-scale knowledge graph with 14,541 entities and 237 relations. Using Domingos's relation matrix formulation $R_r = E^\top A_r E$, we achieve MRR of 0.3068 on standard link prediction and MRR of 0.3346 on a compositional reasoning benchmark where direct edges are removed during training, demonstrating that matrix composition enables multi-hop inference without direct training examples.
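As a minimal sketch of the abstract's two central constructions, the transitive-closure iteration (experiment 1) and the relation-matrix formulation $R_r = E^\top A_r E$ (experiment 3) can be reproduced in a few lines of NumPy. The 4-person toy chain and the random orthogonal embeddings below are illustrative assumptions, not the authors' datasets or training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Sketch of experiment 1: recursive Datalog rules as iterated contractions ---
# Toy 4-person chain (a hypothetical stand-in for the paper's 1,972-person graph).
n = 4
parent = np.zeros((n, n))
for p, c in [(0, 1), (1, 2), (2, 3)]:
    parent[p, c] = 1.0  # parent[i, j] = 1 iff i is a parent of j

# Datalog:  ancestor(X,Y) :- parent(X,Y).
#           ancestor(X,Z) :- ancestor(X,Y), parent(Y,Z).
# The recursive body is an Einstein summation over the shared variable Y,
# thresholded back to {0, 1}; iterating reaches a fixed point (the closure).
ancestor = parent.copy()
while True:
    step = (np.einsum("xy,yz->xz", ancestor, parent) > 0).astype(float)
    new = np.maximum(ancestor, step)
    if np.array_equal(new, ancestor):  # fixed point: no new facts derived
        break
    ancestor = new
print(int(ancestor.sum()))  # 6 ancestor pairs for the 4-person chain

# --- Sketch of experiment 3: superposed relation matrices, R_r = E^T A_r E ---
# With orthogonal embeddings (E E^T = I) the construction is exact, so matrix
# composition recovers the two-hop ("grandparent") adjacency even though no
# direct two-hop edges were ever given.
E, _ = np.linalg.qr(rng.standard_normal((n, n)))  # n x n orthogonal embedding
R_parent = E.T @ parent @ E                       # relation matrix in embedding space
R_grand = R_parent @ R_parent                     # composition = two-hop relation
recovered = E @ R_grand @ E.T
print(np.allclose(recovered, parent @ parent))    # True
```

With learned, non-orthogonal embeddings (as on FB15k-237), $E E^\top \approx I$ holds only approximately, so composition yields approximate rather than exact multi-hop inference.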
Related papers
- Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic [6.419602857618508]
We introduce a framework for learning continuous neural representations of formal specifications. We distill a symbolic robustness kernel into a Transformer encoder. The encoder produces embeddings in a single forward pass, effectively mimicking the kernel's logic at a fraction of its computational cost.
arXiv Detail & Related papers (2026-03-05T14:08:25Z)
- Ternary Gamma Semirings as a Novel Algebraic Framework for Learnable Symbolic Reasoning [0.0]
Symbolic AI tasks are inherently triadic, including subject-predicate-object relations in knowledge graphs. Existing neural architectures usually approximate these interactions by flattening or factorizing them into binary components. This paper introduces the Neural Ternary Semiring (NTS), a learnable and differentiable algebraic framework.
arXiv Detail & Related papers (2025-11-21T19:26:18Z)
- SRNN: Spatiotemporal Relational Neural Network for Intuitive Physics Understanding [5.9229807497571665]
This paper introduces the Spatiotemporal Relational Neural Network (SRNN), a model that establishes a unified representation for neural object attributes, relations, and timeline. On the CLEVR benchmark, SRNN achieves competitive performance, confirming its capability to represent essential language relations from the visual stream. Our work provides a proof-of-concept that confirms the viability of translating key neural intelligence into engineered systems for intuitive physics understanding in constrained environments.
arXiv Detail & Related papers (2025-11-10T06:43:42Z)
- The Features at Convergence Theorem: a first-principles alternative to the Neural Feature Ansatz for how networks learn representations [16.67524623230699]
A leading approach is the Neural Feature Ansatz (NFA), a conjectured mechanism for how feature learning occurs. Although the NFA is empirically validated, it is an educated guess and lacks a theoretical basis. We take a first-principles approach to understanding why this observation holds, and when it does not.
arXiv Detail & Related papers (2025-07-08T03:52:48Z)
- Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
arXiv Detail & Related papers (2025-06-26T22:40:30Z)
- TabVer: Tabular Fact Verification with Natural Logic [11.002475880349452]
We propose a set-theoretic interpretation of numerals and arithmetic functions in the context of natural logic.
We leverage large language models to generate arithmetic expressions by generating questions about salient parts of a claim which are answered by executing functions on tables.
In a few-shot setting on FEVEROUS, we achieve an accuracy of 71.4, outperforming both fully neural and symbolic reasoning models by 3.4 points.
arXiv Detail & Related papers (2024-11-02T00:36:34Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics, and exploit higher-order statistics only later during training; we call this the distributional simplicity bias (DSB). We discuss the relation of DSB to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Neural Proof Nets [0.8379286663107844]
We propose a neural variant of proof nets based on Sinkhorn networks, which allows us to cast parsing as the problem of extracting primitives and permuting them into alignment.
We test our approach on AEThel, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear lambda-calculus with an accuracy as high as 70%.
arXiv Detail & Related papers (2020-09-26T22:48:47Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.