From Statistical Relational to Neurosymbolic Artificial Intelligence: a Survey
- URL: http://arxiv.org/abs/2108.11451v4
- Date: Tue, 2 Jan 2024 07:55:58 GMT
- Title: From Statistical Relational to Neurosymbolic Artificial Intelligence: a Survey
- Authors: Giuseppe Marra, Sebastijan Dumančić, Robin Manhaeve and Luc De Raedt
- Abstract summary: This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence.
It identifies seven shared dimensions between these two subfields of AI.
By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning.
- Score: 16.080532260468594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This survey explores the integration of learning and reasoning in two
different fields of artificial intelligence: neurosymbolic and statistical
relational artificial intelligence. Neurosymbolic artificial intelligence
(NeSy) studies the integration of symbolic reasoning and neural networks, while
statistical relational artificial intelligence (StarAI) focuses on integrating
logic with probabilistic graphical models. This survey identifies seven shared
dimensions between these two subfields of AI. These dimensions can be used to
characterize different NeSy and StarAI systems. They are concerned with (1) the
approach to logical inference, whether model or proof-based; (2) the syntax of
the used logical theories; (3) the logical semantics of the systems and their
extensions to facilitate learning; (4) the scope of learning, encompassing
either parameter or structure learning; (5) the presence of symbolic and
subsymbolic representations; (6) the degree to which systems capture the
original logic, probabilistic, and neural paradigms; and (7) the classes of
learning tasks the systems are applied to. By positioning various NeSy and
StarAI systems along these dimensions and pointing out similarities and
differences between them, this survey contributes fundamental concepts for
understanding the integration of learning and reasoning.
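Dimension (1), model-based versus proof-based logical inference, can be made concrete with a small sketch. The Python snippet below is an illustration added here, not taken from the survey; the smokers-and-friends atoms and clauses are toy stand-ins. It enumerates the models of a tiny propositional theory and, separately, answers a query by backward chaining over definite clauses.

```python
from itertools import product

# Model-based view: enumerate the truth assignments (models) satisfying a theory.
atoms = ["smokes_a", "smokes_b", "friends_ab", "cancer_a"]

def satisfies(world):
    w = dict(zip(atoms, world))
    implies = lambda p, q: (not p) or q
    return (implies(w["smokes_a"], w["cancer_a"])                              # smoking causes cancer
            and implies(w["friends_ab"], w["smokes_a"] == w["smokes_b"]))      # friends behave alike

models = [w for w in product([False, True], repeat=len(atoms)) if satisfies(w)]
print(len(models), "models satisfy the theory")

# Proof-based view: answer a query by backward chaining over definite clauses.
rules = {"cancer_a": [["smokes_a"]],                      # cancer_a :- smokes_a.
         "smokes_a": [["friends_ab", "smokes_b"]]}        # smokes_a :- friends_ab, smokes_b.
facts = {"friends_ab", "smokes_b"}

def prove(goal):
    if goal in facts:
        return True
    return any(all(prove(g) for g in body) for body in rules.get(goal, []))

print("cancer_a provable:", prove("cancer_a"))
```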
Related papers
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy-logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
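The fuzzy-logic relaxation mentioned in the LOGICSEG entry above can be sketched generically as follows; this is a minimal illustration, not the LOGICSEG implementation, and the class names, toy probabilities, and choice of the Reichenbach implication are assumptions. A hierarchy rule such as cat(x) -> animal(x) is grounded on every pixel and its violation becomes a differentiable loss term.

```python
import numpy as np

# Toy per-pixel class probabilities from a hypothetical segmentation head.
# Columns: [cat, dog, animal, background]; rows are pixels.
probs = np.array([[0.80, 0.05, 0.60, 0.10],   # violates cat -> animal (0.80 > 0.60)
                  [0.10, 0.70, 0.85, 0.05],
                  [0.05, 0.05, 0.10, 0.90]])
CAT, ANIMAL = 0, 2

def fuzzy_implies(p, q):
    """Reichenbach relaxation of p -> q: 1 - p + p*q, a degree in [0, 1]."""
    return 1.0 - p + p * q

# Ground "cat(x) -> animal(x)" on every pixel x; the violation of the formula
# becomes a loss term that can be added to the usual segmentation loss.
truth = fuzzy_implies(probs[:, CAT], probs[:, ANIMAL])
logic_loss = float(np.mean(1.0 - truth))
print("per-pixel truth degrees:", np.round(truth, 3))
print("logic-induced loss term:", round(logic_loss, 3))
```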
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
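How symbolic results can refine and correct neural predictions, as described for GBPGR above, can be illustrated with a deliberately simple sketch; the facts, rule, and scores are invented, and this is not GBPGR's bi-level formulation. Here, triples derived by a symbolic rule overrule low-confidence neural scores.

```python
# Symbolic closure of a tiny knowledge base (rule and facts are invented).
facts = {("alice", "parent_of", "bob"), ("bob", "parent_of", "carol")}

def symbolic_closure(facts):
    """Apply grandparent_of(X, Z) :- parent_of(X, Y), parent_of(Y, Z)."""
    derived = set(facts)
    for (x, r1, y1) in facts:
        for (y2, r2, z) in facts:
            if r1 == "parent_of" and r2 == "parent_of" and y1 == y2:
                derived.add((x, "grandparent_of", z))
    return derived

# Hypothetical neural link predictor: probability that each triple holds.
neural = {("alice", "grandparent_of", "carol"): 0.35,   # the network is unsure here
          ("alice", "parent_of", "carol"): 0.40}

def refine(neural, derived):
    """Raise the score of triples entailed by the rules; leave the rest alone."""
    return {t: (1.0 if t in derived else p) for t, p in neural.items()}

print(refine(neural, symbolic_closure(facts)))
```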
- Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in Raven's Progressive Matrices, and achieve accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
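The dual-system mediation idea above can be sketched as a propose-and-check loop; this is a toy illustration rather than the paper's architecture, with a random sampler standing in for a neural System 1 and a hand-written check standing in for a symbolic System 2.

```python
import random

random.seed(0)
world_state = {"door": "locked", "alice_has_key": False}

def system1_propose():
    """Stand-in for a fast neural proposer: samples a candidate next event."""
    return random.choice(["alice opens the door",
                          "alice picks up the key",
                          "alice walks away"])

def system2_consistent(event, state):
    """Stand-in for a slow symbolic checker: opening a locked door needs the key."""
    if event == "alice opens the door":
        return state["door"] != "locked" or state["alice_has_key"]
    return True

# Mediation loop: keep sampling from System 1 until System 2 accepts an event.
for _ in range(10):
    event = system1_propose()
    if system2_consistent(event, world_state):
        print("accepted:", event)
        break
    print("rejected:", event)
```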
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
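The core Real Logic idea behind Logic Tensor Networks, grounding constants as vectors and predicates as differentiable functions into [0, 1], can be sketched in a few lines. This is a NumPy illustration with random stand-in weights and invented constants, not the actual LTN library API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constants are grounded as feature vectors and predicates as differentiable
# functions into [0, 1]; the weights below are random stand-ins for learned ones.
embeddings = {"rex": rng.normal(size=4), "tweety": rng.normal(size=4)}
W_dog, W_barks = rng.normal(size=4), rng.normal(size=4)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
Dog   = lambda x: sigmoid(W_dog @ embeddings[x])
Barks = lambda x: sigmoid(W_barks @ embeddings[x])

# Fuzzy implication (Reichenbach): a -> b is relaxed to 1 - a + a*b.
IMPL = lambda a, b: 1.0 - a + a * b

# Truth degree of "forall x: Dog(x) -> Barks(x)", approximated by averaging
# over the known constants; maximizing it would train W_dog and W_barks.
truth = float(np.mean([IMPL(Dog(x), Barks(x)) for x in embeddings]))
print("truth of 'every dog barks':", round(truth, 3))
```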
- Neural-Symbolic Relational Reasoning on Graph Models: Effective Link Inference and Computation from Knowledge Bases [0.5669790037378094]
We propose a neural-symbolic graph model that learns over all the paths by feeding it the embedding of the minimal subnetwork of the knowledge graph containing those paths.
By learning to produce representations for entities and facts that correspond to word embeddings, we show how the model can be trained end-to-end to decode these representations and infer relations between entities in a relational approach.
arXiv Detail & Related papers (2020-05-05T22:46:39Z)
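Embedding-based link inference of the kind discussed above can be sketched with a generic DistMult-style scorer; the embeddings are random stand-ins and the entities are invented, and the paper's path-based graph model is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Random stand-ins for learned entity and relation embeddings.
entities  = {name: rng.normal(size=dim) for name in ["lisbon", "portugal", "europe"]}
relations = {"located_in": rng.normal(size=dim)}

def score(head, rel, tail):
    """DistMult-style score sum(h * r * t); higher means a more plausible link."""
    return float(np.sum(entities[head] * relations[rel] * entities[tail]))

for tail in ["portugal", "europe"]:
    print(f"located_in(lisbon, {tail}): score {score('lisbon', 'located_in', tail):+.2f}")
```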
- From Statistical Relational to Neuro-Symbolic Artificial Intelligence [15.092816085610513]
This survey identifies several parallels across seven different dimensions between these two fields.
These can not only be used to characterize and position neuro-symbolic artificial intelligence approaches, but also to identify a number of directions for further research.
arXiv Detail & Related papers (2020-03-18T16:15:46Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
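Using a knowledge base to guide the training of a deep neural network, as proposed in the entry above, is often realized by adding a knowledge-derived penalty to the usual loss. The sketch below uses invented labels, an invented rule car(x) -> vehicle(x), and toy outputs; it illustrates one such regularizer rather than the paper's architecture.

```python
import numpy as np

# Hypothetical multi-label network outputs for classes [car, vehicle, person].
probs = np.array([[0.7, 0.5, 0.1],
                  [0.2, 0.9, 0.6]])
targets = np.array([0, 1])          # gold class index per example
CAR, VEHICLE = 0, 1

def cross_entropy(probs, targets):
    """Standard data-fit term on the gold classes."""
    picked = probs[np.arange(len(targets)), targets]
    return float(-np.mean(np.log(picked + 1e-9)))

def knowledge_penalty(probs):
    """Penalize violations of the background rule car(x) -> vehicle(x)."""
    violation = np.clip(probs[:, CAR] - probs[:, VEHICLE], 0.0, None)
    return float(np.mean(violation))

lam = 0.5                            # weight of the knowledge term
loss = cross_entropy(probs, targets) + lam * knowledge_penalty(probs)
print("loss with knowledge-based regularizer:", round(loss, 3))
```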