Learning Guided Automated Reasoning: A Brief Survey
- URL: http://arxiv.org/abs/2403.04017v1
- Date: Wed, 6 Mar 2024 19:59:17 GMT
- Title: Learning Guided Automated Reasoning: A Brief Survey
- Authors: Lasse Blaauwbroek, David Cerna, Thibault Gauthier, Jan Jakubův,
Cezary Kaliszyk, Martin Suda, Josef Urban
- Abstract summary: We provide an overview of several automated reasoning and theorem proving domains and the learning and AI methods that have been so far developed for them.
These include premise selection, proof guidance in several settings, feedback loops iterating between reasoning and learning, and symbolic classification problems.
- Score: 5.607616497275423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated theorem provers and formal proof assistants are general
reasoning systems that are in theory capable of proving arbitrarily hard
theorems, thus solving arbitrary problems reducible to mathematics and logical
reasoning. In practice, however, such systems face a large combinatorial
explosion, and therefore include many heuristics and choice points that
considerably influence their performance. This is an opportunity for trained
machine learning predictors, which can guide the work of such reasoning
systems. Conversely, deductive search supported by the notion of a logically
valid proof allows one to train machine learning systems on large reasoning
corpora. Such bodies of proof are usually correct by construction, and when
combined with more and more precise trained guidance they can be bootstrapped
into very large corpora, with increasingly long reasoning chains and possibly
novel proof ideas. In this paper we provide an overview of several automated
reasoning and theorem proving domains and the learning and AI methods that
have so far been developed for them. These include premise selection, proof
guidance in several settings, AI systems and feedback loops iterating between
reasoning and learning, and symbolic classification problems.
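Premise selection, mentioned above, means ranking the facts of a large library by their likely usefulness for proving a given conjecture. A minimal sketch of the idea, assuming a bag-of-symbols feature representation and a simple similarity ranker (all function names and the toy library here are illustrative, not taken from the survey):

```python
from collections import Counter

def symbol_features(formula: str) -> Counter:
    # Crude bag-of-symbols features: count tokens after stripping parentheses.
    return Counter(formula.replace("(", " ").replace(")", " ").split())

def overlap(a: Counter, b: Counter) -> float:
    # Similarity = normalized size of the multiset intersection (Dice score).
    inter = sum((a & b).values())
    total = sum(a.values()) + sum(b.values())
    return 2 * inter / total if total else 0.0

def select_premises(conjecture: str, premises: list[str], k: int = 2) -> list[str]:
    # Rank library facts by symbol overlap with the conjecture, keep the top k.
    feats = symbol_features(conjecture)
    ranked = sorted(premises, key=lambda p: overlap(feats, symbol_features(p)),
                    reverse=True)
    return ranked[:k]

library = ["add(X, zero) = X", "mul(X, zero) = zero", "rev(rev(L)) = L"]
print(select_premises("add(s(Y), zero) = s(Y)", library, k=1))
# → ['add(X, zero) = X']
```

Real systems replace this hand-rolled overlap with trained predictors (k-NN over learned features, gradient boosting, or neural encoders), but the interface is the same: conjecture in, ranked premises out.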
Related papers
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- A Hybrid System for Systematic Generalization in Simple Arithmetic Problems [70.91780996370326]
We propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols.
We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases.
arXiv Detail & Related papers (2023-06-29T18:35:41Z)
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language [11.096348678079574]
A backward-chaining algorithm, called LAMBADA, decomposes reasoning into four sub-modules.
We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods.
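The core of backward chaining, which LAMBADA lifts to natural language, is goal-directed search: start from the goal and recursively look for rules whose conclusion matches it. A toy propositional sketch (LAMBADA's actual sub-modules operate on natural-language statements via an LLM; the rules and facts below are illustrative):

```python
# Toy propositional backward chainer: each rule maps a conclusion
# to one or more lists of premises that jointly establish it.
RULES = {
    "mortal(socrates)": [["man(socrates)"]],
    "man(socrates)": [["human(socrates)"]],
}
FACTS = {"human(socrates)"}

def prove(goal: str, depth: int = 10) -> bool:
    if goal in FACTS:
        return True
    if depth == 0:  # depth bound guards against cyclic rule sets
        return False
    # Try every rule concluding the goal; all of its premises must hold.
    return any(all(prove(p, depth - 1) for p in premises)
               for premises in RULES.get(goal, []))

print(prove("mortal(socrates)"))  # → True
```

Forward chaining would instead saturate from the facts; backward chaining only explores rules relevant to the goal, which is what makes it attractive for long deduction chains.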
arXiv Detail & Related papers (2022-12-20T18:06:03Z)
- Chaining Simultaneous Thoughts for Numerical Reasoning [92.2007997126144]
Numerical reasoning over text should be an essential skill of AI systems.
Previous work focused on modeling the structures of equations, and has proposed various structured decoders.
We propose CANTOR, a numerical reasoner that models reasoning steps using a directed acyclic graph.
arXiv Detail & Related papers (2022-11-29T18:52:06Z)
- Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought [10.524051272257614]
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought prompts.
We present a new synthetic question-answering dataset called PrOntoQA, where each example is generated as a synthetic world model.
This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis.
arXiv Detail & Related papers (2022-10-03T21:34:32Z)
- Adversarial Learning to Reason in an Arbitrary Logic [5.076419064097733]
Existing approaches to learning to prove theorems focus on particular logics and datasets.
We propose Monte-Carlo simulations guided by reinforcement learning that can work in an arbitrarily specified logic.
arXiv Detail & Related papers (2022-04-06T11:25:19Z)
- End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking [52.05847268235338]
We show how machine learning systems can perform logical extrapolation without overthinking problems.
We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten.
We also employ a progressive training routine that prevents the model from learning behaviors that are specific to number and instead pushes it to learn behaviors that can be repeated indefinitely.
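The "recall" idea — re-reading an explicit copy of the problem instance at every recurrent step so it cannot be forgotten, with extra steps handling longer inputs — can be illustrated with a minimal iterative solver (a schematic stand-in on a toy prefix-sum task, not the paper's network):

```python
def recall_step(state: list, problem: list) -> list:
    # One recurrent step: extend the partial answer by one element,
    # always reading from the explicit copy of the full problem instance.
    i = len(state)
    prev = state[-1] if state else 0
    return state + [prev + problem[i]]

def solve(problem: list, steps: int) -> list:
    # The same step is iterated; running more steps handles longer inputs,
    # mimicking logical extrapolation beyond the training regime.
    state = []
    for _ in range(min(steps, len(problem))):
        state = recall_step(state, problem)
    return state

print(solve([1, 2, 3, 4], steps=4))  # → [1, 3, 6, 10]
```

Because the update rule never depends on the input length, the same iterated step generalizes from short training instances to arbitrarily long ones — the behavior the progressive training routine is meant to encourage.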
arXiv Detail & Related papers (2020-07-14T02:19:25Z)
- Verification of ML Systems via Reparameterization [6.482926592121413]
We show how a probabilistic program can be automatically represented in a theorem prover.
We also prove that the null model used in a Bayesian hypothesis test satisfies a fairness criterion called demographic parity.
arXiv Detail & Related papers (2020-03-29T23:17:16Z)
- Extending Automated Deduction for Commonsense Reasoning [0.0]
The paper argues that the methods and algorithms used by automated reasoners for classical first-order logic can be extended towards commonsense reasoning.
The proposed extensions mostly rely on ordinary proof trees and are devised to handle commonsense knowledge bases containing inconsistencies, default rules, operating on topics, relevance, confidence and similarity measures.
We claim that machine learning is best suited for the construction of commonsense knowledge bases while the extended logic-based methods would be well-suited for actually answering queries from these knowledge bases.
arXiv Detail & Related papers (2020-03-29T23:17:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.