Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
- URL: http://arxiv.org/abs/2006.08480v1
- Date: Mon, 15 Jun 2020 15:29:49 GMT
- Title: Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
- Authors: Vaishak Belle
- Abstract summary: Tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence.
We report on results that challenge the view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
- Score: 12.47276164048813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The tension between deduction and induction is perhaps the most fundamental
issue in areas such as philosophy, cognition and artificial intelligence (AI).
The deduction camp concerns itself with questions about the expressiveness of
formal languages for capturing knowledge about the world, together with proof
systems for reasoning from such knowledge bases. The learning camp attempts to
generalize from examples about partial descriptions about the world. In AI,
historically, these camps have loosely divided the development of the field,
but advances in cross-over areas such as statistical relational learning,
neuro-symbolic systems, and high-level control have illustrated that the
dichotomy is not very constructive, and perhaps even ill-formed. In this
article, we survey work that provides further evidence for the connections
between logic and learning. Our narrative is structured in terms of three
strands: logic versus learning, machine learning for logic, and logic for
machine learning, but naturally, there is considerable overlap. We place an
emphasis on the following "sore" point: there is a common misconception that
logic is for discrete properties, whereas probability theory and machine
learning, more generally, is for continuous properties. We report on results
that challenge this view on the limitations of logic, and expose the role that
logic can play for learning in infinite domains.
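To make the abstract's "sore" point concrete: the sketch below is not taken from the paper; the formula, the standard-normal density, and the Monte Carlo estimator are illustrative choices of this summary. It shows how a logical constraint over a continuous variable, whose set of models is infinite, still admits a well-defined probability once a density is fixed.
```python
import random

def formula(x):
    # A logical constraint over a continuous variable x:
    # (0 <= x <= 1) OR (x > 2). Its set of models is uncountably infinite,
    # yet "how probable is it that the formula holds?" is well defined
    # once a density over x is fixed.
    return (0.0 <= x <= 1.0) or (x > 2.0)

def probability_of_formula(num_samples=100_000):
    # Monte Carlo estimate of P(formula(x)) for x ~ N(0, 1): integrate the
    # standard-normal density over the region where the formula is satisfied.
    hits = sum(formula(random.gauss(0.0, 1.0)) for _ in range(num_samples))
    return hits / num_samples

if __name__ == "__main__":
    # Analytically: (Phi(1) - Phi(0)) + (1 - Phi(2)) ~= 0.364.
    print(f"P(formula) ~= {probability_of_formula():.3f}")
```
The sampling loop is only the simplest way to see that the query is meaningful; exact approaches integrate the density symbolically over the satisfying region.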
Related papers
- Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities [31.728976421529577]
We investigate the contrast across abstract and contextualized logical problems from a comprehensive set of domains.
We focus on standard propositional logic, specifically propositional deductive and abductive logic reasoning (see the toy sketch after this entry).
Our experiments aim to provide insights into disentangling context in logical reasoning and the true reasoning capabilities of LLMs.
arXiv Detail & Related papers (2024-06-04T21:25:06Z)
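As a reading aid only (the rule and facts below are hypothetical, not drawn from the paper), this sketch contrasts the two propositional reasoning modes the entry mentions: deduction derives a consequence from a rule and an observed antecedent, while abduction proposes an antecedent that would explain an observed consequent.
```python
RULES = {"rain": "wet_grass"}  # toy propositional rule: rain -> wet_grass

def deduce(observed_fact):
    # Deduction (modus ponens): from the rule and an observed antecedent,
    # conclude the consequent; the conclusion is guaranteed given the rule.
    return RULES.get(observed_fact)

def abduce(observation):
    # Abduction: from the rule and an observed consequent, propose antecedents
    # that would explain it; the explanation is plausible, not guaranteed
    # (the grass could be wet for other reasons).
    return [ante for ante, cons in RULES.items() if cons == observation]

print(deduce("rain"))       # -> wet_grass
print(abduce("wet_grass"))  # -> ['rain']
```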
- Three Pathways to Neurosymbolic Reinforcement Learning with Interpretable Model and Policy Networks [4.242435932138821]
We study a class of neural networks that build interpretable semantics directly into their architecture.
We reveal and highlight both the potential and the essential difficulties of combining logic, simulation, and learning.
arXiv Detail & Related papers (2024-02-07T23:00:24Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess the performance of the models.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training (see the toy sketch after this entry).
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
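The LOGICSEG summary refers to grounding logical formulae via a fuzzy-logic-based continuous relaxation so that logic can shape network training. The snippet below is not the paper's formulation; it is a generic sketch (hypothetical class names, a product-t-norm-style relaxation) of how a rule such as "cat implies animal" becomes a differentiable penalty on predicted probabilities.
```python
# pip install torch
import torch

def implication_penalty(p_antecedent, p_consequent):
    # One common fuzzy relaxation of the rule "antecedent -> consequent":
    # truth(a -> c) ~= 1 - a * (1 - c), so the penalty (how far the rule
    # is from fully true) is a * (1 - c). It is differentiable in both inputs.
    return p_antecedent * (1.0 - p_consequent)

# Hypothetical per-pixel class probabilities produced by some network
# (requires_grad so the logic penalty can backpropagate into that network).
p_cat = torch.tensor(0.9, requires_grad=True)
p_animal = torch.tensor(0.4, requires_grad=True)

loss = implication_penalty(p_cat, p_animal)  # 0.54: "cat -> animal" is violated
loss.backward()
print(float(loss), float(p_cat.grad), float(p_animal.grad))
```
Because the penalty is differentiable, it can simply be added to a model's training loss, which is the general sense in which logic "induces" network training.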
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Statistical relational learning and neuro-symbolic AI: what does first-order logic offer? [12.47276164048813]
Our aim is to briefly survey and articulate the logical and philosophical foundations of using (first-order) logic to represent (probabilistic) knowledge in a non-technical fashion.
For machine learning researchers unaware of why the research community cares about relational representations, this article can serve as a gentle introduction.
For logic experts who are newcomers to the learning area, such an article can help in navigating the differences between finite vs. infinite domains, and subjective probabilities vs. random-world semantics.
arXiv Detail & Related papers (2023-06-08T12:34:31Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities still remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve the logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning (a toy weighted-logic sketch follows this entry).
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
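The Logical Neural Networks entry says each neuron corresponds to a component of a formula in a weighted real-valued logic. The function below is only a generic illustration of that idea (a weighted, Lukasiewicz-style conjunction with hypothetical inputs and weights), not the paper's actual activation or its omnidirectional inference procedure.
```python
def weighted_and(truth_values, weights, beta=1.0):
    # A weighted real-valued conjunction: every input truth value lies in
    # [0, 1], and each weight scales how strongly that input's falsity
    # (1 - value) pulls the conjunction toward false. With all weights = 1
    # and beta = 1 this reduces to the Lukasiewicz AND, e.g.
    # max(0, x1 + x2 - 1) for two inputs.
    total_falsity = sum(w * (1.0 - x) for x, w in zip(truth_values, weights))
    return min(1.0, max(0.0, beta - total_falsity))

# Hypothetical formula AND(raining, rush_hour) with "raining" weighted more heavily.
print(weighted_and([0.9, 0.7], [1.0, 1.0]))  # 0.6, the classical Lukasiewicz AND
print(weighted_and([0.9, 0.7], [2.0, 0.5]))  # 0.65, weights shift the neuron's meaning
```
Learning the weights then amounts to adjusting how strictly each conjunct is enforced, which is one way a neuron can carry a logical meaning while remaining trainable.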
- Three Modern Roles for Logic in AI [11.358487655918676]
We consider three modern roles for logic in artificial intelligence.
These include computation, learning from a combination of data and knowledge, and reasoning about the behavior of machine learning systems.
arXiv Detail & Related papers (2020-04-18T11:51:13Z)
- SMT + ILP [12.47276164048813]
In this position paper, we motivate a reconsideration of inductive declarative programming by leveraging satisfiability modulo theory technology (see the toy SMT sketch after this entry).
arXiv Detail & Related papers (2020-01-15T10:09:21Z)
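To give a concrete flavour of inductive declarative programming on top of satisfiability modulo theories (the threshold hypothesis space and the z3 encoding below are this summary's own toy example, not the paper's system), the sketch asks an SMT solver to induce a rule consistent with a handful of labeled examples.
```python
# pip install z3-solver
from z3 import Int, Solver, sat

# Labeled examples: (input value, should the induced rule accept it?)
examples = [(1, False), (3, False), (6, True), (9, True)]

# Hypothesis space: rules of the form "accept x iff x >= t" for an unknown
# integer threshold t. Induction becomes satisfiability: find a t that makes
# the rule agree with every example.
t = Int("t")
solver = Solver()
for x, accepted in examples:
    solver.add(x >= t if accepted else x < t)

if solver.check() == sat:
    print(f"Induced rule: accept x iff x >= {solver.model()[t]}")  # any t in 4..6
else:
    print("No threshold rule is consistent with the examples.")
```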
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.