A Semantic Framework for Neuro-Symbolic Computing
- URL: http://arxiv.org/abs/2212.12050v5
- Date: Wed, 27 Nov 2024 00:22:09 GMT
- Title: A Semantic Framework for Neuro-Symbolic Computing
- Authors: Simon Odense, Artur d'Avila Garcez
- Abstract summary: The field of neuro-symbolic AI aims to benefit from the combination of neural networks and symbolic systems.
No common definition of encoding exists that can enable a precise, theoretical comparison of neuro-symbolic methods.
This paper addresses this problem by introducing a semantic framework for neuro-symbolic AI.
- Score: 0.36832029288386137
- Abstract: The field of neuro-symbolic AI aims to benefit from the combination of neural networks and symbolic systems. A cornerstone of the field is the translation, or encoding, of symbolic knowledge into neural networks. Although many neuro-symbolic methods and approaches have been proposed, with a large increase in recent years, no common definition of encoding exists that can enable a precise, theoretical comparison of neuro-symbolic methods. This paper addresses this problem by introducing a semantic framework for neuro-symbolic AI. We start by providing a formal definition of semantic encoding, specifying the components and conditions under which a knowledge base can be encoded correctly by a neural network. We then show that many neuro-symbolic approaches are accounted for by this definition. We provide a number of examples and correspondence proofs applying the proposed framework to the neural encoding of various forms of knowledge representation. Many neuro-symbolic methods that appear disparate at first sight are shown to fall within the proposed formalization. This is expected to provide guidance for future neuro-symbolic encodings by placing them in the broader context of semantic encodings of entire families of existing neuro-symbolic systems. The paper hopes to help initiate a discussion around the provision of a theory for neuro-symbolic AI and a semantics for deep learning.
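As an illustrative sketch only (not the paper's formal definition of semantic encoding), the snippet below compiles a toy propositional logic program into a network-like update rule in the spirit of classic logic-program encodings such as CILP / the Core Method: one forward step computes the immediate-consequence operator, and iterating it to a fixed point recovers the program's least model. The rules, atom names, and helper function are hypothetical examples chosen for the sketch.

```python
# Hedged sketch: encoding a small propositional logic program into a
# threshold-style network whose forward step computes the
# immediate-consequence operator T_P.

RULES = [                      # toy knowledge base P
    ("a", []),                 # a.           (fact)
    ("b", ["a"]),              # b :- a.
    ("c", ["a", "b"]),         # c :- a, b.
    ("d", ["e"]),              # d :- e.      (e is never derivable)
]
ATOMS = sorted({h for h, _ in RULES} | {b for _, bs in RULES for b in bs})

def forward(state):
    """One network step: a hidden 'rule' neuron fires iff every atom in its
    body is active; an output atom is active iff some rule for it fired."""
    fired = {h for h, body in RULES if all(state[b] for b in body)}
    return {atom: atom in fired for atom in ATOMS}

# Iterate from the empty interpretation until a fixed point is reached.
state = {atom: False for atom in ATOMS}
while (nxt := forward(state)) != state:
    state = nxt

print({a for a, v in state.items() if v})   # {'a', 'b', 'c'}: the least model of P
```

Under this reading, the network encodes the knowledge base correctly when its fixed points agree with the models of the program; the paper's framework is aimed at making this kind of correspondence precise for much richer languages and network classes.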
Related papers
- Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine.
We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures.
We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z)
- Shadow of the (Hierarchical) Tree: Reconciling Symbolic and Predictive Components of the Neural Code for Syntax [1.223779595809275]
I discuss the prospects of reconciling the neural code for hierarchical 'vertical' syntax with linear and predictive 'horizontal' processes.
I provide a neurosymbolic mathematical model for how to inject symbolic representations into a neural regime encoding lexico-semantic statistical features.
arXiv Detail & Related papers (2024-12-02T08:44:16Z)
- Closed-Form Interpretation of Neural Network Latent Spaces with Symbolic Gradients [0.0]
We introduce a framework for finding closed-form interpretations of neurons in latent spaces of artificial neural networks.
The interpretation framework is based on embedding trained neural networks into an equivalence class of functions that encode the same concept.
arXiv Detail & Related papers (2024-09-09T03:26:07Z)
- A short Survey: Exploring knowledge graph-based neural-symbolic system from application perspective [0.0]
Achieving human-like reasoning and interpretability in AI systems remains a substantial challenge.
The Neural-Symbolic paradigm, which integrates neural networks with symbolic systems, presents a promising pathway toward more interpretable AI.
This paper explores recent advancements in neural-symbolic integration based on Knowledge Graphs.
arXiv Detail & Related papers (2024-05-06T14:40:50Z)
- Symbol Correctness in Deep Neural Networks Containing Symbolic Layers [0.0]
We formalize a high-level principle that can guide the design and analysis of NS-DNNs.
We show that symbol correctness is a necessary property for NS-DNN explainability and transfer learning.
arXiv Detail & Related papers (2024-02-06T03:33:50Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture [4.855957436171202]
We propose a list of desirable criteria for neuro-symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to generalized annotated logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
arXiv Detail & Related papers (2023-02-23T17:39:46Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)