Learning Conjecturing from Scratch
- URL: http://arxiv.org/abs/2503.01389v1
- Date: Mon, 03 Mar 2025 10:39:38 GMT
- Title: Learning Conjecturing from Scratch
- Authors: Thibault Gauthier, Josef Urban
- Abstract summary: We develop a self-learning approach for conjecturing induction predicates on a dataset of 16197 problems. The algorithm discovers on its own many interesting induction predicates, ultimately solving 5565 problems.
- Score: 4.373803477995854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a self-learning approach for conjecturing induction predicates on a dataset of 16197 problems derived from the OEIS. These problems are hard for today's SMT and ATP systems because they require a combination of inductive and arithmetical reasoning. Starting from scratch, our approach consists of a feedback loop that iterates between (i) training a neural translator to learn the correspondence between the problems solved so far and the induction predicates useful for them, (ii) using the trained neural system to generate many new induction predicates for the problems, (iii) fast runs of the Z3 prover attempting to prove the problems using the generated predicates, and (iv) using heuristics such as predicate size and solution speed on the proved problems to choose the best predicates for the next iteration of training. The algorithm discovers on its own many interesting induction predicates, ultimately solving 5565 problems, compared to 2265 problems solved by CVC5, Vampire or Z3 in 60 seconds.
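The four-step loop in the abstract has a simple shape, sketched below in Python. All callables passed in (the translator trainer, the predicate generator, the Z3 proving wrapper, and the size measure) are hypothetical placeholders standing in for the authors' components; this illustrates the structure of the feedback loop, not their actual implementation.

```python
import time

def self_learning_loop(problems, train, generate, z3_prove, size,
                       iterations=10, timeout_s=1.0):
    """Sketch of the self-learning feedback loop. Assumed contracts:
      train(pairs)         -> a problem-to-predicate translator model
      generate(model, p)   -> candidate induction predicates for p
      z3_prove(p, pred, t) -> True if Z3 proves p with pred within t seconds
      size(pred)           -> predicate size, used to rank solutions
    The loop starts from scratch: the first round trains on no data.
    """
    solved = {}  # problem -> (predicate, predicate size, proof time)
    for _ in range(iterations):
        # (i) retrain the translator on the best (problem, predicate)
        #     pairs collected from previously solved problems
        model = train([(p, pred) for p, (pred, _, _) in solved.items()])
        for p in problems:
            # (ii) generate many new candidate induction predicates
            for pred in generate(model, p):
                # (iii) fast Z3 run with the candidate predicate added
                t0 = time.time()
                if z3_prove(p, pred, timeout_s):
                    key = (size(pred), time.time() - t0)
                    # (iv) keep the smallest / fastest predicate per problem
                    if p not in solved or key < solved[p][1:]:
                        solved[p] = (pred,) + key
    return solved
```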
Related papers
- EHOP: A Dataset of Everyday NP-Hard Optimization Problems [66.41749917354159]
Everyday Hard Optimization Problems (EHOP) is a collection of NP-hard optimization problems expressed in natural language.
EHOP includes problem formulations that could be found in computer science textbooks, versions that are dressed up as problems that could arise in real life, and variants of well-known problems with inverted rules.
We find that state-of-the-art LLMs, across multiple prompting strategies, systematically solve textbook problems more accurately than their real-life and inverted counterparts.
arXiv Detail & Related papers (2025-02-19T14:39:59Z)
- A Smooth Transition Between Induction and Deduction: Fast Abductive Learning Based on Probabilistic Symbol Perception [81.30687085692576]
We introduce an optimization algorithm named Probabilistic Symbol Perception (PSP), which makes a smooth transition between induction and deduction.
Experiments demonstrate promising results.
arXiv Detail & Related papers (2025-02-18T14:59:54Z)
- Combining Induction and Transduction for Abstract Reasoning [13.399370315305408]
We train neural models for induction (inferring latent functions) and transduction (directly predicting the test output for a given test input) on ARC.
We find inductive and transductive models solve different kinds of test problems, despite having the same training problems and sharing the same neural architecture.
arXiv Detail & Related papers (2024-11-04T17:03:55Z)
- Exact Enforcement of Temporal Continuity in Sequential Physics-Informed Neural Networks [0.0]
We introduce a method to enforce continuity between successive time segments via a solution ansatz.
The method is tested for a number of benchmark problems involving both linear and non-linear PDEs.
Numerical experiments with the proposed method demonstrate superior convergence and accuracy over both traditional PINNs and soft-constrained counterparts.
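One common way to enforce such continuity exactly is to build it into the network output itself. The PyTorch sketch below shows one such ansatz, u(t) = u0 + (t - t0) * net(t), which matches the previous segment's end value by construction; the specific form is an assumption for illustration, not necessarily the paper's exact formulation.

```python
import torch

class SegmentNet(torch.nn.Module):
    """One time segment of a sequential PINN. With the ansatz
    u(t) = u0 + (t - t0) * net(t), the output equals u0 exactly at
    t = t0, so continuity with the previous segment holds by
    construction rather than through a soft penalty term."""
    def __init__(self, t0: float, u0: float):
        super().__init__()
        self.t0, self.u0 = t0, u0
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 1),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t has shape (N, 1); the prefactor (t - t0) vanishes at t0
        return self.u0 + (t - self.t0) * self.net(t)
```

Training then proceeds segment by segment, with each new segment's (t0, u0) taken from the end of the trained previous segment.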
arXiv Detail & Related papers (2024-02-15T17:41:02Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement [50.62461749446111]
Self-Polish (SP) is a novel method that facilitates the model's reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable.
SP is orthogonal to all other answer/reasoning-side prompting methods such as CoT, allowing for seamless integration with state-of-the-art techniques for further improvement.
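As a rough illustration of the refinement loop this describes, the sketch below iteratively rewrites a problem until the rewrite stabilizes; the `llm` callable, the prompt wording, and the fixed-point stopping rule are all assumptions, not details taken from the paper.

```python
def self_polish(problem: str, llm, max_rounds: int = 3) -> str:
    """Hedged sketch of the Self-Polish idea: repeatedly ask the model
    to restate the problem more clearly, then answer the polished
    version. `llm` is a hypothetical text-completion callable."""
    current = problem
    for _ in range(max_rounds):
        refined = llm(
            "Rewrite the following problem so it is clearer and easier "
            f"to solve, keeping its meaning unchanged:\n{current}"
        ).strip()
        if refined == current:  # one plausible convergence check
            break
        current = refined
    # answer the refined problem with any downstream method (e.g. CoT)
    return llm(f"{current}\nLet's think step by step.")
```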
arXiv Detail & Related papers (2023-05-23T19:58:30Z)
- Neural Algorithmic Reasoning for Combinatorial Optimisation [20.36694807847833]
We propose leveraging recent advancements in neural reasoning to improve the learning of CO problems.
We suggest pre-training our neural model on relevant algorithms before training it on CO instances.
Our results demonstrate that by using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.
arXiv Detail & Related papers (2023-05-18T13:59:02Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models [52.59923418570378]
We propose a novel prompting strategy, least-to-most prompting, to overcome the challenge of easy-to-hard generalization.
We show that least-to-most prompting is capable of generalizing to more difficult problems than those seen in prompts.
By contrast, neural-symbolic models in the literature that specialize in solving SCAN are trained on the entire training set containing over 15,000 examples.
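The two-stage structure of least-to-most prompting (decompose the problem, then solve the subproblems in order) can be sketched as follows; the `llm` callable and the prompt formats are hypothetical stand-ins for a real model and its few-shot exemplars.

```python
def least_to_most(problem: str, llm, decompose_prompt: str,
                  solve_prompt: str) -> str:
    """Hedged sketch of least-to-most prompting. Stage 1 asks the model
    to break the problem into easier subproblems; stage 2 solves them
    sequentially, feeding each answer back into the context so later
    subproblems can build on earlier ones."""
    decomposition = llm(f"{decompose_prompt}\nQ: {problem}\nSubproblems:")
    subproblems = [s.strip() for s in decomposition.split("\n") if s.strip()]
    context = f"{solve_prompt}\n{problem}\n"
    answer = ""
    for sub in subproblems:
        answer = llm(f"{context}Q: {sub}\nA:")
        context += f"Q: {sub}\nA: {answer}\n"  # accumulate solved subproblems
    return answer  # the final subproblem's answer solves the whole problem
```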
arXiv Detail & Related papers (2022-05-21T15:34:53Z)
- Sublinear Least-Squares Value Iteration via Locality Sensitive Hashing [49.73889315176884]
We present the first provable Least-Squares Value Iteration (LSVI) algorithms that have runtime complexity sublinear in the number of actions.
We build the connections between the theory of approximate maximum inner product search and the regret analysis of reinforcement learning.
arXiv Detail & Related papers (2021-05-18T05:23:53Z)
- Definitional Quantifiers Realise Semantic Reasoning for Proof by Induction [6.85316573653194]
SeLFiE is a query language for representing users' knowledge of how to apply the induct tactic in Isabelle/HOL.
For evaluation, we build an automatic induction prover using SeLFiE.
Our new prover achieves a 1.4 x 10^3 % improvement over the corresponding baseline prover for a 1.0-second timeout, and the median speedup is 4.48x.
arXiv Detail & Related papers (2020-10-19T09:05:09Z)
- Faster Smarter Induction in Isabelle/HOL [6.85316573653194]
sem_ind recommends what arguments to pass to the induct method.
Definitional quantifiers allow us to investigate not only the syntactic structures of inductive problems but also the definitions of relevant constants in a domain-agnostic style.
arXiv Detail & Related papers (2020-09-19T11:51:54Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.