Ternary Gamma Semirings as a Novel Algebraic Framework for Learnable Symbolic Reasoning
- URL: http://arxiv.org/abs/2511.17728v1
- Date: Fri, 21 Nov 2025 19:26:18 GMT
- Title: Ternary Gamma Semirings as a Novel Algebraic Framework for Learnable Symbolic Reasoning
- Authors: Chandrasekhar Gokavarapu, D. Madhusudhana Rao
- Abstract summary: Symbolic AI tasks are inherently triadic, including subject-predicate-object relations in knowledge graphs. Existing neural architectures usually approximate these interactions by flattening or factorizing them into binary components. This paper introduces the Neural Ternary Semiring (NTS), a learnable and differentiable algebraic framework.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Binary semirings such as the tropical, log, and probability semirings form a core algebraic tool in classical and modern neural inference systems, supporting tasks like Viterbi decoding, dynamic programming, and probabilistic reasoning. However, these structures rely on a binary multiplication operator and therefore model only pairwise interactions. Many symbolic AI tasks are inherently triadic, including subject-predicate-object relations in knowledge graphs, logical rules involving two premises and one conclusion, and multi-entity dependencies in structured decision processes. Existing neural architectures usually approximate these interactions by flattening or factorizing them into binary components, which weakens inductive structure, distorts relational meaning, and reduces interpretability. This paper introduces the Neural Ternary Semiring (NTS), a learnable and differentiable algebraic framework grounded in the theory of ternary Gamma-semirings. The central idea is to replace the usual binary product with a native ternary operator implemented by neural networks and guided by algebraic regularizers enforcing approximate associativity and distributivity. This construction allows triadic relationships to be represented directly rather than reconstructed from binary interactions. We establish a soundness result showing that, when algebraic violations vanish during training, the learned operator converges to a valid ternary Gamma-semiring. We also outline an evaluation strategy for triadic reasoning tasks such as knowledge-graph completion and rule-based inference. These insights demonstrate that ternary Gamma-semirings provide a mathematically principled and practically effective foundation for learnable symbolic reasoning.
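The abstract describes replacing a binary product with a native ternary operator trained under algebraic regularizers. The paper provides no code, so the following is only a minimal sketch of that idea under assumed choices: a trilinear parameterization of the ternary product (the tensor `W`, the function `ternary`, and the particular bracketing in the penalty are all illustrative, not from the paper), with one associativity-violation term of the kind such a regularizer could penalize during training.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (illustrative)

# Hypothetical trilinear ternary product:
#   T(a, b, c)_l = sum_{i,j,k} W[i, j, k, l] * a_i * b_j * c_k
W = rng.normal(scale=0.1, size=(d, d, d, d))

def ternary(a, b, c):
    return np.einsum("ijkl,i,j,k->l", W, a, b, c)

a, b, c, u, v = rng.normal(size=(5, d))

# One approximate-associativity penalty term (one of several bracketings):
#   || T(T(a,b,c), u, v) - T(a, T(b,c,u), v) ||^2
# A training loop would add such terms to the task loss and drive them
# toward zero, which the paper's soundness result ties to recovering a
# valid ternary Gamma-semiring.
lhs = ternary(ternary(a, b, c), u, v)
rhs = ternary(a, ternary(b, c, u), v)
penalty = float(np.sum((lhs - rhs) ** 2))
```

In practice `W` would be a learned parameter (or replaced by a deeper network) and the penalty averaged over sampled tuples; this sketch only shows how a ternary product and its associativity defect can be expressed directly, without factorization into binary products.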
Related papers
- Implementing Tensor Logic: Unifying Datalog and Neural Reasoning via Tensor Contraction [0.0]
Tensor Logic, proposed by Domingos, suggests that logical rules and Einstein summation are mathematically equivalent. This paper provides empirical validation of this framework through three experiments.
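The claimed equivalence between logical rules and Einstein summation can be illustrated with a toy example (not from the paper): a Datalog rule whose shared variable becomes the contracted index of an `einsum`, with existential quantification recovered by booleanizing the result.

```python
import numpy as np

# Toy Datalog rule:  path(X, Z) :- edge(X, Y), edge(Y, Z)
# The shared variable Y is the contracted index of a tensor contraction.
edge = np.array([[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]], dtype=int)  # edges 0 -> 1 and 1 -> 2

# Sum over Y, then booleanize: "there exists a Y" becomes "count > 0".
path2 = np.einsum("xy,yz->xz", edge, edge) > 0
print(path2.astype(int))
# [[0 0 1]
#  [0 0 0]
#  [0 0 0]]   -- the only length-2 path is 0 -> 2
```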
arXiv Detail & Related papers (2026-01-23T21:38:19Z) - Bhargava Cube--Inspired Quadratic Regularization for Structured Neural Embeddings [0.0]
We present a novel approach to neural representation learning that incorporates algebraic constraints inspired by Bhargava cubes from number theory. Our framework maps input data to constrained 3-dimensional latent spaces where embeddings are regularized to satisfy learned quadratic relationships. We evaluate on MNIST, achieving 99.46% accuracy while producing interpretable 3D embeddings that naturally cluster by digit class.
arXiv Detail & Related papers (2025-12-12T09:05:11Z) - Hybrid Models for Natural Language Reasoning: The Case of Syllogistic Logic [3.421904493396495]
We investigate the logical generalization capabilities of pre-trained large language models (LLMs) using the syllogistic fragment as a benchmark. We propose a hybrid architecture integrating symbolic reasoning with neural computation. Our experiments show that high efficiency is preserved even with relatively small neural components.
arXiv Detail & Related papers (2025-10-10T15:27:29Z) - Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
arXiv Detail & Related papers (2025-06-26T22:40:30Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning [11.11883703395469]
This research offers a fresh take on neural operators with the framework of Representation equivalent Neural Operators (ReNO).
At its core is the concept of operator aliasing, which measures inconsistency between neural operators and their discrete representations.
Our findings detail how aliasing introduces errors when handling different discretizations and grids and loss of crucial continuous structures.
arXiv Detail & Related papers (2023-05-31T14:45:34Z) - Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Learning Algebraic Recombination for Compositional Generalization [71.78771157219428]
We propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization.
Key insight is to model the semantic parsing task as a homomorphism between a latent syntactic algebra and a semantic algebra.
Experiments on two realistic and comprehensive compositional generalization benchmarks demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2021-07-14T07:23:46Z) - Relational Reasoning Networks [9.262082274025708]
This paper presents Relational Reasoning Networks (R2N), a novel end-to-end neuro-symbolic model that performs reasoning in the latent space of a deep architecture. R2Ns can be applied to purely symbolic tasks or as a neuro-symbolic platform to integrate learning and reasoning in heterogeneous problems.
arXiv Detail & Related papers (2021-06-01T11:02:22Z) - Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z) - A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics [131.93113552146195]
We present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images.
We undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3.
arXiv Detail & Related papers (2021-03-02T01:32:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.