Deep Symbolic Learning: Discovering Symbols and Rules from Perceptions
- URL: http://arxiv.org/abs/2208.11561v2
- Date: Mon, 24 Apr 2023 15:36:50 GMT
- Title: Deep Symbolic Learning: Discovering Symbols and Rules from Perceptions
- Authors: Alessandro Daniele, Tommaso Campari, Sagar Malhotra, and Luciano Serafini
- Abstract summary: Neuro-Symbolic (NeSy) integration combines symbolic reasoning with Neural Networks (NNs) for tasks requiring perception and reasoning.
Most NeSy systems rely on continuous relaxation of logical knowledge, and no discrete decisions are made within the model pipeline.
We propose a NeSy system that learns NeSy-functions, i.e., the composition of a (set of) perception functions which map continuous data to discrete symbols, and a symbolic function over the set of symbols.
- Score: 69.40242990198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuro-Symbolic (NeSy) integration combines symbolic reasoning with Neural
Networks (NNs) for tasks requiring perception and reasoning. Most NeSy systems
rely on continuous relaxation of logical knowledge, and no discrete decisions
are made within the model pipeline. Furthermore, these methods assume that the
symbolic rules are given. In this paper, we propose Deep Symbolic Learning
(DSL), a NeSy system that learns NeSy-functions, i.e., the composition of a
(set of) perception functions which map continuous data to discrete symbols,
and a symbolic function over the set of symbols. DSL simultaneously learns the
perception and symbolic functions while being trained only on their composition
(NeSy-function). The key novelty of DSL is that it can create internal
(interpretable) symbolic representations and map them to perception inputs
within a differentiable NN learning pipeline. The created symbols are
automatically selected to generate symbolic functions that best explain the
data. We provide experimental analysis to substantiate the efficacy of DSL in
simultaneously learning perception and symbolic functions.
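To make the NeSy-function idea concrete, here is a minimal sketch in the spirit of DSL (not the authors' implementation): a perception network maps MNIST-style images to hard one-hot symbols, a learnable table plays the role of the symbolic function, and training supervises only the composed output (the sum of two digits). The network sizes, the straight-through argmax, and all names are assumptions made for illustration.
```python
# Minimal sketch of a NeSy-function in the spirit of DSL (not the authors' code).
# A perception network maps images to discrete digit symbols; a learnable
# symbolic function (a table over symbol pairs) maps the symbols to an output.
# Training uses only the composed label (the sum of two MNIST digits).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_SYMBOLS = 10   # internal symbols (digits 0-9)
N_OUTPUTS = 19   # possible sums 0..18

class Perception(nn.Module):
    """Maps a 28x28 image to a (hard) one-hot symbol."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, N_SYMBOLS),
        )

    def forward(self, x):
        probs = F.softmax(self.net(x), dim=-1)
        # Straight-through estimator (an assumption): the forward pass uses a
        # hard one-hot symbol, the backward pass uses the soft probabilities.
        hard = F.one_hot(probs.argmax(dim=-1), N_SYMBOLS).float()
        return hard + probs - probs.detach()

class SymbolicFunction(nn.Module):
    """Learnable table: (symbol_a, symbol_b) -> distribution over outputs."""
    def __init__(self):
        super().__init__()
        self.table = nn.Parameter(torch.zeros(N_SYMBOLS, N_SYMBOLS, N_OUTPUTS))

    def forward(self, sym_a, sym_b):
        # Select the table entry indexed by the two one-hot symbols.
        return torch.einsum("bi,bj,ijk->bk", sym_a, sym_b, self.table)

perception = Perception()          # shared across both inputs
symbolic = SymbolicFunction()
opt = torch.optim.Adam(list(perception.parameters()) +
                       list(symbolic.parameters()), lr=1e-3)

def training_step(img_a, img_b, sum_label):
    """One step on the composed NeSy-function: only the sum is supervised."""
    logits = symbolic(perception(img_a), perception(img_b))
    loss = F.cross_entropy(logits, sum_label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```
In this sketch the argmax makes a genuinely discrete decision in the forward pass while the soft probabilities carry the gradient; this is one common way to approximate the behaviour described in the abstract, not necessarily the mechanism used by DSL itself.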
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Symbol Correctness in Deep Neural Networks Containing Symbolic Layers [0.0]
We formalize a high-level principle that can guide the design and analysis of NS-DNNs.
We show that symbol correctness is a necessary property for NS-DNN explainability and transfer learning.
arXiv Detail & Related papers (2024-02-06T03:33:50Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning [58.5857133154749]
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage recent advances in LLMs to approximate these two ideal properties.
Our method shows superiority in extensive activity understanding tasks.
arXiv Detail & Related papers (2023-11-29T05:27:14Z)
- Emergence of Symbols in Neural Networks for Semantic Understanding and Communication [8.156761369660096]
We propose a solution to endow neural networks with the ability to create symbols, understand semantics, and achieve communication.
SEA-net generates symbols that dynamically configure the network to perform specific tasks.
These symbols capture compositional semantic information that allows the system to acquire new functions purely by symbolic manipulation or communication.
arXiv Detail & Related papers (2023-04-13T10:13:00Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks; a minimal sketch of the fuzzy-relaxation idea behind such systems follows this list.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
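For contrast with DSL's discrete symbols, the sketch below illustrates the "continuous relaxation of logical knowledge" used by systems in the LTN family: predicates become neural networks with truth values in [0, 1], and a logical rule is turned into a differentiable score. The predicate shapes, the choice of fuzzy operators, and the smokers/cancer toy rule are assumptions made for illustration, not the ltn library's API.
```python
# Toy illustration of continuous relaxation of a rule such as
#     forall x: Smokes(x) -> Cancer(x)
# The rule is scored as a differentiable truth degree in [0, 1] instead of
# being applied as a hard symbolic rule. Operator choices (Reichenbach
# implication, mean aggregation) are assumptions for this sketch.
import torch
import torch.nn as nn

class Predicate(nn.Module):
    """Maps a feature vector to a fuzzy truth value in [0, 1]."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(-1)

def implies(a, b):
    # Reichenbach fuzzy implication: 1 - a + a*b
    return 1.0 - a + a * b

smokes = Predicate(in_dim=8)
cancer = Predicate(in_dim=8)

x = torch.randn(32, 8)                             # a batch of individuals (toy data)
rule_truth = implies(smokes(x), cancer(x)).mean()  # "forall" as mean aggregation
loss = 1.0 - rule_truth                            # maximize the rule's truth degree
loss.backward()                                    # gradients flow end to end
```
Because every operation here is continuous, no discrete decision is ever taken inside the pipeline, which is exactly the property the DSL abstract contrasts itself against.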
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.