Partial Label Learning for Automated Theorem Proving
- URL: http://arxiv.org/abs/2507.03314v1
- Date: Fri, 04 Jul 2025 05:54:27 GMT
- Title: Partial Label Learning for Automated Theorem Proving
- Authors: Zsolt Zombori, Balázs Indruck,
- Abstract summary: We formulate learning guided Automated Theorem Proving as Partial Label Learning. We use the plCoP theorem prover to demonstrate that methods from the Partial Label Learning literature tend to increase the performance of learning assisted theorem provers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formulate learning guided Automated Theorem Proving as Partial Label Learning, building the first bridge across these fields of research and providing a theoretical framework for dealing with alternative proofs during learning. We use the plCoP theorem prover to demonstrate that methods from the Partial Label Learning literature tend to increase the performance of learning assisted theorem provers.
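The abstract's core idea lends itself to a short illustration. Below is a minimal sketch, assuming a PyTorch setup: each proof state comes with a set of candidate inference steps (steps that occur in at least one known proof), and a policy network is trained with a partial-label loss that only asks for probability mass on the candidate set as a whole. This is not the paper's plCoP implementation; the names (PolicyNet, partial_label_loss) and the particular loss choice are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of proof guidance as
# Partial Label Learning: alternative proofs give each proof state a *set*
# of acceptable next steps rather than a single correct label.

import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Scores all possible inference steps for an encoded proof state (illustrative)."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, num_actions)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # unnormalized logits over actions


def partial_label_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """Negative log of the total probability assigned to the candidate set.

    logits:         (batch, num_actions) action scores
    candidate_mask: (batch, num_actions) 1.0 where the action appears in some
                    known proof of the problem, 0.0 otherwise
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    # log sum_{a in candidates} p(a | state), computed stably in log space
    masked = log_probs.masked_fill(candidate_mask == 0, float("-inf"))
    return -torch.logsumexp(masked, dim=-1).mean()


if __name__ == "__main__":
    policy = PolicyNet(state_dim=32, num_actions=10)
    states = torch.randn(4, 32)                               # 4 encoded proof states
    mask = torch.zeros(4, 10)
    mask[torch.arange(4), torch.randint(0, 10, (4,))] = 1.0   # one candidate step each
    mask[:, 0] = 1.0                                          # plus a shared alternative
    loss = partial_label_loss(policy(states), mask)
    loss.backward()
    print(float(loss))
```

In this view, alternative proofs simply enlarge the candidate set instead of producing conflicting "correct" labels, which is exactly the kind of label ambiguity that Partial Label Learning methods are designed to handle.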
Related papers
- Personalized Exercise Recommendation with Semantically-Grounded Knowledge Tracing [54.44838681588145]
ExRec is a framework for personalized exercise recommendation with semantically-grounded knowledge tracing. We show that ExRec generalizes robustly to new, unseen questions and that it produces interpretable student learning trajectories.
arXiv Detail & Related papers (2025-07-15T07:54:04Z)
- DeepTheorem: Advancing LLM Reasoning for Theorem Proving Through Natural Language and Reinforcement Learning [67.93945726549289]
DeepTheorem is a comprehensive informal theorem-proving framework exploiting natural language to enhance mathematical reasoning. DeepTheorem includes a large-scale benchmark dataset consisting of 121K high-quality IMO-level informal theorems and proofs. We devise a novel reinforcement learning strategy (RL-Zero) explicitly tailored to informal theorem proving, leveraging the verified theorem variants to incentivize robust mathematical inference.
arXiv Detail & Related papers (2025-05-29T17:59:39Z)
- Online inductive learning from answer sets for efficient reinforcement learning exploration [52.03682298194168]
We exploit inductive learning of answer set programs to learn a set of logical rules representing an explainable approximation of the agent policy. We then perform answer set reasoning on the learned rules to guide the exploration of the learning agent at the next batch. Our methodology produces a significant boost in the discounted return achieved by the agent, even in the first batches of training.
arXiv Detail & Related papers (2025-01-13T16:13:22Z)
- Learning Rules Explaining Interactive Theorem Proving Tactic Prediction [5.229806149125529]
We represent the problem as an Inductive Logic Programming (ILP) task.
Using the ILP representation, we enrich the feature space by encoding additional, computationally expensive properties.
We use this enriched feature space to learn rules explaining when a tactic is applicable to a given proof state.
arXiv Detail & Related papers (2024-11-02T09:18:33Z)
- Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification [10.55096104577668]
We explore the decorrelation-based paradigm of self-supervised learning and apply it to learning disentangled stroke features for writer identification.
We show that the proposed framework outperforms the contemporary self-supervised learning framework on the writer identification benchmark.
To the best of our knowledge, this work is the first to apply self-supervised learning to representation learning for writer verification tasks.
arXiv Detail & Related papers (2024-10-02T11:43:58Z)
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof. Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- Aligning Logits Generatively for Principled Black-Box Knowledge Distillation [49.43567344782207]
Black-Box Knowledge Distillation (B2KD) is the problem of cloud-to-edge model compression when the data and the model hosted on the server are invisible.
We formalize a two-step workflow consisting of deprivatization and distillation.
We propose a new method Mapping-Emulation KD (MEKD) that distills a black-box cumbersome model into a lightweight one.
arXiv Detail & Related papers (2022-05-21T02:38:16Z)
- Learning to Guide a Saturation-Based Theorem Prover [9.228237801323042]
TRAIL is a deep learning-based approach to theorem proving that characterizes core elements of saturation-based theorem proving within a neural framework.
To the best of our knowledge, TRAIL is the first reinforcement learning-based approach to exceed the performance of a state-of-the-art traditional theorem prover.
arXiv Detail & Related papers (2021-06-07T18:35:57Z)
- TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning [6.764610878007278]
We propose a novel approach to interactive theorem-proving (ITP) using deep reinforcement learning.
We formulate the process of ITP as a Markov decision process (MDP) in which each state represents a set of potential derivation paths.
We show that the framework provides performance comparable to approaches that rely on human experts.
arXiv Detail & Related papers (2021-02-19T06:08:39Z)
- Generative Language Modeling for Automated Theorem Proving [94.01137612934842]
This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans might be addressable via generation from language models.
We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance.
arXiv Detail & Related papers (2020-09-07T19:50:10Z)
- Learning to Prove Theorems by Learning to Generate Theorems [71.46963489866596]
We learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.
Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover.
arXiv Detail & Related papers (2020-02-17T16:06:02Z)
- An Experimental Study of Formula Embeddings for Automated Theorem Proving in First-Order Logic [13.633012068936894]
We study and experimentally compare pattern-based embeddings that are applied in current systems with popular graph-based encodings.
Our experiments show that the runtime advantages of simpler encoding schemes are outweighed by the benefits of more complex graph-based embeddings.
arXiv Detail & Related papers (2020-02-02T16:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.