Learning Representations for Sub-Symbolic Reasoning
- URL: http://arxiv.org/abs/2106.00393v1
- Date: Tue, 1 Jun 2021 11:02:22 GMT
- Title: Learning Representations for Sub-Symbolic Reasoning
- Authors: Giuseppe Marra, Michelangelo Diligenti, Francesco Giannini and Marco Maggini
- Abstract summary: This paper presents a novel end-to-end model that performs relational reasoning in the latent space of a deep learner.
Neuro-symbolic methods integrate neural architectures, knowledge representation and reasoning.
The proposed model bridges the gap between previous neuro-symbolic methods, which have been limited in either scalability or expressivity.
- Score: 15.064026484896301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuro-symbolic methods integrate neural architectures, knowledge
representation and reasoning. However, they struggle both to handle the
intrinsic uncertainty of the observations and to scale to real-world
applications. This paper presents Relational Reasoning Networks (R2N), a
novel end-to-end model that performs relational reasoning in the latent space
of a deep learner architecture, where the representations of constants, ground
atoms and their manipulations are learned in an integrated fashion. Unlike flat
architectures like Knowledge Graph Embedders, which can only represent
relations between entities, R2Ns define an additional computational structure,
accounting for higher-level relations among the ground atoms. The considered
relations can be explicitly known, like the ones defined by logic formulas, or
defined as unconstrained correlations among groups of ground atoms. R2Ns can be
applied to purely symbolic tasks or as a neuro-symbolic platform to integrate
learning and reasoning in heterogeneous problems whose entities are represented
either symbolically or by feature vectors. The proposed model bridges the gap
between previous neuro-symbolic methods, which have been limited in either
scalability or expressivity. The proposed methodology is shown to achieve
state-of-the-art results in different experimental settings.
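The pipeline the abstract describes (constant embeddings, ground-atom embeddings built from them, and a reasoning layer over groups of atoms) can be sketched as follows. This is a minimal illustrative toy, not the paper's actual R2N architecture: the predicate vocabulary, dimensions, update rule, and every function name here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: constants and ground atoms of a binary relation (assumed example).
constants = ["alice", "bob", "carol"]
atoms = [("knows", "alice", "bob"), ("knows", "bob", "carol"), ("knows", "alice", "carol")]

DIM = 8
const_emb = {c: rng.normal(size=DIM) for c in constants}
W = rng.normal(size=(DIM, 2 * DIM)) * 0.1  # learned map from constant pairs to atoms

def atom_embedding(atom):
    """Build a ground-atom embedding from its constants' embeddings."""
    _, subj, obj = atom
    return np.tanh(W @ np.concatenate([const_emb[subj], const_emb[obj]]))

emb = np.stack([atom_embedding(a) for a in atoms])

# "Reasoning layer": each atom embedding is refined with a message aggregated from
# the atoms it co-occurs with in one formula grounding, e.g.
# knows(a,b) AND knows(b,c) -> knows(a,c).
U = rng.normal(size=(DIM, DIM)) * 0.1
groups = [(0, 1, 2)]  # indices of atoms appearing together in one grounding
updated = emb.copy()
for g in groups:
    msg = emb[list(g)].mean(axis=0)
    for i in g:
        updated[i] = np.tanh(emb[i] + U @ msg)

# A readout over the refined embeddings yields one truth score per ground atom.
readout = rng.normal(size=DIM) * 0.1
scores = 1.0 / (1.0 + np.exp(-(updated @ readout)))
print(scores.shape)
```

In a trained model, `W`, `U`, and the readout would be learned end-to-end together with the constant embeddings, which is the "integrated fashion" the abstract refers to.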
Related papers
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than from pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning, exploiting both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
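The "fuzzy logic-based continuous relaxation" mentioned above replaces hard logical connectives with differentiable operations over truth values in [0, 1], so violated formulae become a loss term. The sketch below uses the product t-norm and the Reichenbach implication as one common choice; the specific t-norm and the example formula are assumptions, not necessarily LOGICSEG's exact formulation.

```python
def product_tnorm_implication(a, b, c):
    """Fuzzy truth of (A AND B) -> C under the product t-norm:
    t(x AND y) = x * y,  t(x -> y) = 1 - x + x*y (Reichenbach implication)."""
    ante = a * b
    return 1.0 - ante + ante * c

# Predicted class probabilities play the role of fuzzy truth values.
a, b, c = 0.9, 0.8, 0.3
truth = product_tnorm_implication(a, b, c)

# A logic-induced loss penalises violated formulae; since it is differentiable
# in a, b, c, it can be backpropagated through the network during training.
logic_loss = 1.0 - truth
print(round(logic_loss, 3))
```

Grounding a formula "onto data" means evaluating this relaxed expression on the network's actual predictions for each pixel or region, which is how the logical constraint enters the computational graph.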
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Weisfeiler and Leman Go Relational [4.29881872550313]
We investigate the limitations in the expressive power of the well-known Relational GCN (R-GCN) and Composition-based GCN (CompGCN) architectures.
We introduce the $k$-RN architecture that provably overcomes the limitations of the above two architectures.
arXiv Detail & Related papers (2022-11-30T15:56:46Z) - On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
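A "similarity-distribution score" can be read as: compute all pairwise similarities between object encodings and normalise them into distributions, so the downstream decoder sees only relations between objects, never their identities. The sketch below is an illustrative reading of that idea; the function name and dimensions are assumptions.

```python
import numpy as np

def similarity_distribution(objects):
    """Pairwise inner products between object encodings, softmax-normalised
    per row, so each object is described only by how it relates to the rest."""
    sims = objects @ objects.T                      # (n, n) similarity matrix
    sims = sims - sims.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(sims)
    return exp / exp.sum(axis=1, keepdims=True)     # each row is a distribution

rng = np.random.default_rng(0)
objs = rng.normal(size=(4, 16))                     # 4 encoded objects
R = similarity_distribution(objs)
print(R.shape)
```

Discarding absolute object features in favour of this relational code is the architectural inductive bias the summary credits for out-of-distribution generalization.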
arXiv Detail & Related papers (2022-06-09T16:24:01Z) - Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
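A standard building block that delivers both properties is the affine coupling layer (as in RealNVP-style flows): half the input is transformed conditioned on the other half, the inverse is available in closed form, and the Jacobian is triangular, so its log-determinant is a simple sum. The scale/shift functions below are tiny placeholders standing in for learned networks.

```python
import numpy as np

def coupling_forward(x, s, t):
    """Affine coupling layer: split x, transform the second half conditioned
    on the first. Invertible by construction."""
    x1, x2 = x[: len(x) // 2], x[len(x) // 2 :]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    log_det = np.sum(s(x1))  # triangular Jacobian: log-det is just a sum
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, s, t):
    """Exact inverse: y1 passed through unchanged, so s(y1), t(y1) are known."""
    y1, y2 = y[: len(y) // 2], y[len(y) // 2 :]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

s = lambda h: np.tanh(h)   # placeholder scale network
t = lambda h: 0.5 * h      # placeholder shift network

x = np.array([0.3, -1.2, 2.0, 0.7])
y, log_det = coupling_forward(x, s, t)
x_rec = coupling_inverse(y, s, t)
print(np.allclose(x, x_rec))
```

Stacking such layers (permuting which half is transformed) yields an expressive yet exactly invertible network, which is what the universal approximation result in this entry concerns.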
arXiv Detail & Related papers (2022-04-15T10:45:26Z) - pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
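The "differentiable layer from which symbolic relations and rules can be extracted" can be illustrated with a soft conjunction over candidate body literals: each literal carries a learnable weight, the gate on each weight decides whether the literal participates, and thresholding the weights after training yields a discrete rule. This is a generic sketch of that idea, not pix2rule's exact layer; `soft_conjunction` and all values here are hypothetical.

```python
import numpy as np

def soft_conjunction(x, w, beta=4.0):
    """Differentiable AND over truth values x in [0, 1]: a literal whose
    weight gates to ~0 is ignored; one gating to ~1 must be true."""
    gates = 1.0 / (1.0 + np.exp(-beta * w))  # soft membership of each literal
    factors = 1.0 - gates * (1.0 - x)        # ignored literal -> factor 1
    return np.prod(factors)

x = np.array([0.9, 0.1, 0.8])   # truth values of candidate body literals
w = np.array([3.0, -3.0, 3.0])  # learned weights: keep 1st and 3rd, drop 2nd

y = soft_conjunction(x, w)
rule = [i for i, wi in enumerate(w) if wi > 0]  # extracted symbolic rule body
print(float(y), rule)
```

Because the layer is smooth in both `x` and `w`, it can sit inside an ordinary deep network and be trained with gradient descent, which is what makes the end-to-end image-to-rules pipeline possible.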
arXiv Detail & Related papers (2021-06-14T15:19:06Z) - Neural-Symbolic Relational Reasoning on Graph Models: Effective Link Inference and Computation from Knowledge Bases [0.5669790037378094]
We propose a neural-symbolic graph model that learns over all the paths by feeding it the embedding of the minimal network of the knowledge graph containing those paths.
By learning to produce representations for entities and facts corresponding to word embeddings, we show how the model can be trained end-to-end to decode these representations and infer relations between entities in a relational approach.
arXiv Detail & Related papers (2020-05-05T22:46:39Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.