Enhancing Symbolic Machine Learning by Subsymbolic Representations
- URL: http://arxiv.org/abs/2506.14569v1
- Date: Tue, 17 Jun 2025 14:26:21 GMT
- Title: Enhancing Symbolic Machine Learning by Subsymbolic Representations
- Authors: Stephen Roth, Lennart Baur, Derian Boer, Stefan Kramer
- Abstract summary: We propose to enhance symbolic machine learning schemes by giving them access to neural embeddings. In experiments in three real-world domains, we show that this simple, yet effective, approach outperforms all other baseline methods in terms of the F1 score.
- Score: 2.4280350854512673
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The goal of neuro-symbolic AI is to integrate symbolic and subsymbolic AI approaches, to overcome the limitations of either. Prominent systems include Logic Tensor Networks (LTN) or DeepProbLog, which offer neural predicates and end-to-end learning. The versatility of systems like LTNs and DeepProbLog, however, makes them less efficient in simpler settings, for instance, for discriminative machine learning, in particular in domains with many constants. Therefore, we follow a different approach: We propose to enhance symbolic machine learning schemes by giving them access to neural embeddings. In the present paper, we show this for TILDE and embeddings of constants used by TILDE in similarity predicates. The approach can be fine-tuned by further refining the embeddings depending on the symbolic theory. In experiments in three real-world domains, we show that this simple, yet effective, approach outperforms all other baseline methods in terms of the F1 score. The approach could be useful beyond this setting: Enhancing symbolic learners in this way could be extended to similarities between instances (effectively working like kernels within a logical language), for analogical reasoning, or for propositionalization.
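The core idea of the abstract, giving a symbolic learner such as TILDE access to a similarity predicate over neural embeddings of constants, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding table, the constants, the threshold, and the predicate name `similar` are all invented for the example.

```python
import math

# Hypothetical table mapping symbolic constants to neural embeddings
# (values invented for illustration).
embeddings = {
    "aspirin":   [0.9, 0.1, 0.2],
    "ibuprofen": [0.8, 0.2, 0.1],
    "water":     [0.1, 0.9, 0.7],
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def similar(a, b, threshold=0.8):
    """Boolean similarity predicate usable by a symbolic learner:
    similar(A, B) holds iff the cosine similarity of the constants'
    embeddings exceeds the threshold."""
    return cosine(embeddings[a], embeddings[b]) >= threshold

print(similar("aspirin", "ibuprofen"))  # embeddings are close -> True
print(similar("aspirin", "water"))      # embeddings are far   -> False
```

A relational learner can then use `similar/2` as just another background predicate in its hypothesis language, while the embeddings themselves can be refined further depending on the induced symbolic theory.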
Related papers
- Differentiable Logic Programming for Distant Supervision [4.820391833117535]
We introduce a new method for integrating neural networks with logic programming in Neural-Symbolic AI (NeSy)
Unlike prior methods, our approach does not depend on symbolic solvers for reasoning about missing labels.
This method facilitates more efficient learning under distant supervision.
arXiv Detail & Related papers (2024-08-22T17:55:52Z) - logLTN: Differentiable Fuzzy Logic in the Logarithm Space [11.440949097704943]
A trend in the literature involves integrating axioms and facts in loss functions by grounding logical symbols with neural networks and fuzzy semantics.
This paper presents a configuration of fuzzy operators for grounding formulas end-to-end in the logarithm space.
Our findings, both formal and empirical, show that the proposed configuration outperforms the state-of-the-art.
arXiv Detail & Related papers (2023-06-26T09:39:05Z) - Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
arXiv Detail & Related papers (2022-12-30T17:50:54Z) - Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer [59.73454783958702]
We propose a symbolic reasoning architecture that chains many join operators together to model output logical expressions.
In particular, we demonstrate that such an ensemble of join-chains can express a broad subset of "tree-structured" first-order logical expressions, named FOET.
We find that the widely used multi-head self-attention module in transformer can be understood as a special neural operator that implements the union bound of the join operator in probabilistic predicate space.
arXiv Detail & Related papers (2022-10-06T07:39:58Z) - PROTOtypical Logic Tensor Networks (PROTO-LTN) for Zero Shot Learning [2.236663830879273]
Logic Tensor Networks (LTNs) are neuro-symbolic systems based on a differentiable, first-order logic grounded into a deep neural network.
We focus here on the subsumption, or isOfClass, predicate, which is fundamental to encoding most semantic image interpretation tasks.
We propose a common isOfClass predicate, whose level of truth is a function of the distance between an object embedding and the corresponding class prototype.
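The mechanism described in this blurb, a fuzzy truth value that decreases with the distance between an object embedding and a class prototype, can be sketched as follows. This is a hedged illustration in the spirit of PROTO-LTN, not the paper's actual formulation: the prototypes, the Gaussian-style truth function, and the `scale` parameter are assumed for the example.

```python
import math

# Hypothetical class prototypes in embedding space (values invented).
prototypes = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}

def is_of_class(embedding, cls, scale=1.0):
    """Fuzzy truth value in (0, 1]: equals 1 when the object embedding
    coincides with the class prototype and decays with their squared
    Euclidean distance."""
    proto = prototypes[cls]
    d2 = sum((x - p) ** 2 for x, p in zip(embedding, proto))
    return math.exp(-scale * d2)

x = [0.9, 0.1]  # an object embedding near the "cat" prototype
print(is_of_class(x, "cat"))  # close to 1
print(is_of_class(x, "dog"))  # much smaller
```

Because the truth value is differentiable in the embedding, gradients from a logical loss can move object embeddings toward (or away from) class prototypes during training.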
arXiv Detail & Related papers (2022-06-26T18:34:07Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z) - Learning with Holographic Reduced Representations [28.462635977110413]
Holographic Reduced Representations (HRR) are a method for performing symbolic AI on top of real-valued vectors.
This paper revisits this approach to understand if it is viable for enabling a hybrid neural-symbolic approach to learning.
arXiv Detail & Related papers (2021-09-05T19:37:34Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.