Statistical relational learning and neuro-symbolic AI: what does
first-order logic offer?
- URL: http://arxiv.org/abs/2306.13660v1
- Date: Thu, 8 Jun 2023 12:34:31 GMT
- Title: Statistical relational learning and neuro-symbolic AI: what does
first-order logic offer?
- Authors: Vaishak Belle
- Abstract summary: Our aim is to briefly survey and articulate the logical and philosophical foundations of using (first-order) logic to represent (probabilistic) knowledge in a non-technical fashion.
For machine learning researchers unaware of why the research community cares about relational representations, this article can serve as a gentle introduction.
For logical experts who are newcomers to the learning area, such an article can help in navigating the differences between finite vs infinite, and subjective probabilities vs random-world semantics.
- Score: 12.47276164048813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, our aim is to briefly survey and articulate the logical and
philosophical foundations of using (first-order) logic to represent
(probabilistic) knowledge in a non-technical fashion. Our motivation is three
fold. First, for machine learning researchers unaware of why the research
community cares about relational representations, this article can serve as a
gentle introduction. Second, for logical experts who are newcomers to the
learning area, such an article can help in navigating the differences between
finite vs infinite, and subjective probabilities vs random-world semantics.
Finally, for researchers from statistical relational learning and
neuro-symbolic AI, who are usually embedded in finite worlds with subjective
probabilities, appreciating what infinite domains and random-world semantics
bring to the table is of utmost theoretical import.
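The finite-world, subjective-probability setting contrasted in the abstract can be illustrated with a toy sketch (all names and numbers here are hypothetical, not from the paper): over a finite domain, a query's probability is the total weight of the possible worlds that satisfy it.

```python
from itertools import product

# Hypothetical toy: a two-element domain and one unary predicate Smokes.
# Each possible world assigns True/False to every ground atom; a world's
# weight is the product of per-atom subjective probabilities.
domain = ["a", "b"]
p_smokes = {"a": 0.3, "b": 0.6}  # assumed P(Smokes(x)) values

def world_weight(world):
    w = 1.0
    for x in domain:
        p = p_smokes[x]
        w *= p if world[x] else (1.0 - p)
    return w

def prob(query):
    """Probability of a query: sum weights over all 2^|domain| worlds."""
    return sum(world_weight(dict(zip(domain, vals)))
               for vals in product([True, False], repeat=len(domain))
               if query(dict(zip(domain, vals))))

# Query: "exists x. Smokes(x)"
exists_smoker = lambda w: any(w.values())
print(round(prob(exists_smoker), 4))  # 1 - 0.7*0.4 = 0.72
```

With infinite domains, this enumeration is no longer available, which is one reason the random-world semantics discussed in the paper requires different machinery.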
Related papers
- Machine learning and information theory concepts towards an AI
Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z) - Three Pathways to Neurosymbolic Reinforcement Learning with
Interpretable Model and Policy Networks [4.242435932138821]
We study a class of neural networks that build interpretable semantics directly into their architecture.
We reveal and highlight both the potential and the essential difficulties of combining logic, simulation, and learning.
arXiv Detail & Related papers (2024-02-07T23:00:24Z) - LINC: A Neurosymbolic Approach for Logical Reasoning by Combining
Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
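The fuzzy-logic continuous relaxation mentioned above can be sketched roughly as follows (a minimal illustration of the general idea, not LOGICSEG's actual formulation): logical connectives become differentiable operations on truth values in [0, 1], so a grounded formula yields a scalar that can drive network training.

```python
# Product t-norm relaxation of logical connectives (illustrative only).
def f_and(a, b):
    return a * b

def f_or(a, b):
    return a + b - a * b

def f_not(a):
    return 1.0 - a

def f_implies(a, b):  # a -> b relaxed as not(a) or b
    return f_or(f_not(a), b)

# Grounding a rule like "Cat(x) -> Animal(x)" on soft predictions
# (the 0.9 and 0.7 below are assumed network outputs, not real data):
cat, animal = 0.9, 0.7
truth = f_implies(cat, animal)
loss = 1.0 - truth  # penalize violated formulae during training
```

Because every operation is differentiable, the loss can be backpropagated through the predicate scores, which is what makes logic-induced network training possible.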
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
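A hedged reading of this idea can be sketched as a toy Bayesian update (the theories, likelihoods, and data below are invented for illustration and are not from the paper): candidate logical theories are scored by how well observed data satisfies them, and the scores are normalized into a posterior.

```python
# Toy posterior over two hypothetical "theories" given observations.
data = [("raven", "black"), ("raven", "black"), ("raven", "white")]

def likelihood(theory, datum):
    kind, color = datum
    if theory == "all ravens are black":
        # Satisfied observations are likely; violations heavily penalized.
        return 1.0 if (kind != "raven" or color == "black") else 0.1
    return 0.5  # "no constraint": every observation equally likely

post = {"all ravens are black": 0.5, "no constraint": 0.5}  # uniform prior
for d in data:
    post = {t: p * likelihood(t, d) for t, p in post.items()}
z = sum(post.values())
post = {t: p / z for t, p in post.items()}
```

A single violating observation is enough to shift belief toward the weaker theory, which mirrors how satisfiability in formal logic can act as the bridge between data and symbolic knowledge.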
arXiv Detail & Related papers (2023-05-18T16:34:51Z) - Neuro-Symbolic Forward Reasoning [19.417231973682366]
Neuro-Symbolic Forward Reasoner (NSFR) is a new approach to reasoning tasks that takes advantage of differentiable forward-chaining using first-order logic.
The key idea is to combine differentiable forward-chaining reasoning with object-centric (deep) learning.
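Differentiable forward-chaining of this general flavor can be sketched in a few lines (a simplified illustration with invented atoms and values, not NSFR's actual architecture): ground atoms carry soft truth values, and each rule-application step updates them with smooth operations in place of Boolean connectives.

```python
# Soft truth values for ground atoms of a tiny transitive-closure program
# (all numbers are assumed, e.g. confidences from a perception module).
v = {"edge(a,b)": 0.9, "edge(b,c)": 0.8,
     "path(a,b)": 0.0, "path(b,c)": 0.0, "path(a,c)": 0.0}

# Ground rules as (head, body): path(X,Y) :- edge(X,Y)
# and path(X,Z) :- path(X,Y), path(Y,Z), grounded on {a, b, c}.
rules = [("path(a,b)", ["edge(a,b)"]),
         ("path(b,c)", ["edge(b,c)"]),
         ("path(a,c)", ["path(a,b)", "path(b,c)"])]

def step(v):
    new_v = dict(v)
    for head, body in rules:
        conj = 1.0
        for atom in body:          # soft conjunction: product
            conj *= v[atom]
        new_v[head] = max(new_v[head], conj)  # soft disjunction: max
    return new_v

for _ in range(2):  # two chaining steps are enough to derive path(a,c)
    v = step(v)
print(round(v["path(a,c)"], 2))  # 0.9 * 0.8 = 0.72
```

Since each step is built from differentiable (or sub-differentiable) operations, gradients can flow from a query's truth value back to the input atom scores, which is what lets such reasoners sit on top of object-centric deep learning.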
arXiv Detail & Related papers (2021-10-18T15:14:58Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z) - Logic, Probability and Action: A Situation Calculus Perspective [12.47276164048813]
The unification of logic and probability is a long-standing concern in AI.
We explore recent results pertaining to the integration of logic, probability and actions in the situation calculus.
Results are motivated in the context of cognitive robotics.
arXiv Detail & Related papers (2020-06-17T13:49:53Z) - Symbolic Logic meets Machine Learning: A Brief Survey in Infinite
Domains [12.47276164048813]
Tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence.
We report on results that challenge the view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
arXiv Detail & Related papers (2020-06-15T15:29:49Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.