Adversarial Learning to Reason in an Arbitrary Logic
- URL: http://arxiv.org/abs/2204.02737v1
- Date: Wed, 6 Apr 2022 11:25:19 GMT
- Title: Adversarial Learning to Reason in an Arbitrary Logic
- Authors: Stanisław J. Purgał and Cezary Kaliszyk
- Abstract summary: Existing approaches to learning to prove theorems focus on particular logics and datasets.
We propose Monte-Carlo simulations guided by reinforcement learning that can work in an arbitrarily specified logic.
- Score: 5.076419064097733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing approaches to learning to prove theorems focus on particular logics
and datasets. In this work, we propose Monte-Carlo simulations guided by
reinforcement learning that can work in an arbitrarily specified logic, without
any human knowledge or set of problems. Since the algorithm does not need any
training dataset, it is able to learn to work with any logical foundation, even
when there is no body of proofs or even conjectures available. We practically
demonstrate the feasibility of the approach in multiple logical systems. The
approach is stronger than training on randomly generated data but weaker than
the approaches trained on tailored axiom and conjecture sets. It however allows
us to apply machine learning to automated theorem proving for many logics,
where no such attempts have been tried to date, such as intuitionistic logic or
linear logic.
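The abstract describes Monte-Carlo simulations guided by reinforcement learning over an arbitrarily specified logic. Below is a minimal, hypothetical sketch of the core idea: a UCT-style Monte-Carlo tree search over proof states, where the logic is supplied abstractly as a successor function (one inference-rule application per step). This is not the authors' implementation; for simplicity the learned policy/value network is replaced by a uniformly random rollout, and all names (`Node`, `successors`, `is_proved`) are illustrative.

```python
import math
import random

class Node:
    """A node in the search tree over proof states."""
    def __init__(self, state, parent=None):
        self.state = state      # a goal/sequent in the chosen logic
        self.parent = parent
        self.children = []      # successor nodes, one rule application each
        self.visits = 0
        self.value = 0.0        # accumulated reward (1.0 = proof found)

def uct_score(child, parent_visits, c=1.4):
    # Standard UCT: exploit high-value children, explore rarely visited ones.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def mcts(root_state, successors, is_proved, iterations=1000, depth_limit=50):
    """successors(state) -> states reachable by one inference step;
    is_proved(state) -> True when the goal is closed."""
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend the tree by UCT.
        node = root
        while node.children:
            node = max(node.children,
                       key=lambda ch: uct_score(ch, node.visits))
        # 2. Expansion: add all one-step successors of the leaf.
        if not is_proved(node.state):
            node.children = [Node(s, node) for s in successors(node.state)]
        # 3. Simulation: a random rollout stands in for the learned value net.
        state, reward = node.state, 0.0
        for _ in range(depth_limit):
            if is_proved(state):
                reward = 1.0
                break
            nxt = successors(state)
            if not nxt:
                break       # dead end: no applicable inference rule
            state = random.choice(nxt)
        # 4. Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return root
```

In the paper's setting, self-play against an adversarially generated problem stream would replace the fixed `root_state`, and rollout statistics would train the guidance network; the sketch only shows the search skeleton that is logic-agnostic.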
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment the arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average +5.5% MRR gain) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language? This question has received significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Learning Guided Automated Reasoning: A Brief Survey
We provide an overview of several automated reasoning and theorem proving domains and the learning and AI methods that have been so far developed for them.
These include premise selection, proof guidance in several settings, feedback loops iterating between reasoning and learning, and symbolic classification problems.
arXiv Detail & Related papers (2024-03-06T19:59:17Z)
- Empower Nested Boolean Logic via Self-Supervised Curriculum Learning
We find that pre-trained language models, even large language models, behave like random selectors when faced with multi-nested Boolean logic.
To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR).
arXiv Detail & Related papers (2023-10-09T06:54:02Z)
- Connecting Proof Theory and Knowledge Representation: Sequent Calculi and the Chase with Existential Rules
We show that the chase mechanism in the context of existential rules is in essence the same as proof-search in an extension of Gentzen's sequent calculus for first-order logic.
This formally connects a central proof-theoretic tool for establishing decidability with a central decidability tool in knowledge representation.
arXiv Detail & Related papers (2023-06-05T01:10:23Z)
- A Simple Generative Model of Logical Reasoning and Statistical Learning
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
The model simply captures how data gives rise to symbolic knowledge, in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- Logical Credal Networks
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Logical Neural Networks
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
Tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence.
We report on results that challenge the view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
arXiv Detail & Related papers (2020-06-15T15:29:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.