Online Machine Learning Techniques for Coq: A Comparison
- URL: http://arxiv.org/abs/2104.05207v1
- Date: Mon, 12 Apr 2021 05:12:35 GMT
- Title: Online Machine Learning Techniques for Coq: A Comparison
- Authors: Liao Zhang, Lasse Blaauwbroek, Bartosz Piotrowski, Prokop Černý,
Cezary Kaliszyk, and Josef Urban
- Abstract summary: We present a comparison of several online machine learning techniques for tactical learning and proving in the Coq proof assistant.
This work builds on Tactician, a plugin for Coq that learns from proofs written by the user to synthesize new proofs.
- Score: 2.9412498294532856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a comparison of several online machine learning techniques for
tactical learning and proving in the Coq proof assistant. This work builds on
top of Tactician, a plugin for Coq that learns from proofs written by the user
to synthesize new proofs. This learning happens in an online manner -- meaning
that Tactician's machine learning model is updated immediately every time the
user performs a step in an interactive proof. This has important advantages
compared to the more studied offline learning systems: (1) it provides the user
with a seamless, interactive experience with Tactician and, (2) it takes
advantage of locality of proof similarity, which means that proofs similar to
the current proof are likely to be found close by. We implement two online
methods, namely approximate $k$-nearest neighbors based on locality sensitive
hashing forests and random decision forests. Additionally, we conduct
experiments with gradient boosted trees in an offline setting using XGBoost. We
compare the relative performance of Tactician using these three learning
methods on Coq's standard library.
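The abstract's central online technique is approximate $k$-nearest neighbors over locality-sensitive hashing, with the model updated after every proof step. As a rough illustration of that general idea only, below is a minimal sketch using plain random-hyperplane LSH tables rather than the LSH forests the paper actually implements; the class, feature encoding, and tactic names are invented for the example and are not Tactician's API.

```python
# Minimal sketch of online approximate k-NN with random-hyperplane LSH.
# Hypothetical names and encoding; NOT Tactician's actual implementation,
# which uses LSH forests over tactic-state features.
import numpy as np
from collections import defaultdict

class OnlineLSHKNN:
    def __init__(self, dim, n_tables=8, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        # One random hyperplane matrix per hash table.
        self.planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.examples = []  # (feature vector, tactic label)

    def _keys(self, x):
        # Sign pattern of the projections gives a bucket key per table.
        return [tuple((p @ x) > 0) for p in self.planes]

    def add(self, x, tactic):
        """Online update: index one (proof-state, tactic) pair immediately."""
        idx = len(self.examples)
        self.examples.append((x, tactic))
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(idx)

    def query(self, x, k=10):
        """Approximate k nearest stored states; return their tactics."""
        candidates = set()
        for table, key in zip(self.tables, self._keys(x)):
            candidates.update(table[key])
        nearest = sorted(
            candidates,
            key=lambda i: np.linalg.norm(self.examples[i][0] - x),
        )[:k]
        return [self.examples[i][1] for i in nearest]

# Usage: update the model after every proof step, then query for suggestions.
model = OnlineLSHKNN(dim=64)
state = np.random.rand(64)
model.add(state, "intros")
print(model.query(state, k=1))  # -> ['intros']
```

In the paper's setting, an update like `add` would run immediately after each tactic the user executes, which is what lets the model exploit the locality of proof similarity; a lookup like `query` then ranks candidate tactics for the current proof state.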
Related papers
- Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning [50.26965628047682]
Adapting pre-trained models to open classes is a challenging problem in machine learning.
In this paper, we consider combining the advantages of both and come up with a test-time prompt tuning approach.
On average, our proposed method outperforms all comparison methods on both base and new classes.
arXiv Detail & Related papers (2024-08-29T12:34:01Z)
- NeCo: Improving DINOv2's spatial representations in 19 GPU hours with Patch Neighbor Consistency [35.768260232640756]
We introduce NeCo: Patch Neighbor Consistency, a novel training loss that enforces patch-level nearest neighbor consistency across a student and teacher model.
Our method leverages a differentiable sorting method applied on top of pretrained representations, such as DINOv2-registers, to bootstrap the learning signal.
This dense post-pretraining leads to superior performance across various models and datasets, despite requiring only 19 hours on a single GPU.
arXiv Detail & Related papers (2024-08-20T17:58:59Z)
- Graph2Tac: Online Representation Learning of Formal Math Concepts [0.6597195879147557]
In proof assistants, the physical proximity between two formal mathematical concepts is a strong predictor of their mutual relevance.
We show that this locality property can be exploited through online learning techniques to obtain solving agents that far surpass offline learners.
We benchmark two such online solvers implemented in the Tactician platform for the Coq proof assistant.
arXiv Detail & Related papers (2024-01-05T18:52:09Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- CLAWSAT: Towards Both Robust and Accurate Code Models [74.57590254102311]
We integrate contrastive learning (CL) with adversarial learning to co-optimize the robustness and accuracy of code models.
To the best of our knowledge, this is the first systematic study to explore and exploit the robustness and accuracy benefits of (multi-view) code obfuscations in code models.
arXiv Detail & Related papers (2022-11-21T18:32:50Z)
- Co-training an Unsupervised Constituency Parser with Weak Supervision [33.63314110665062]
We introduce a method for unsupervised parsing that relies on bootstrapping classifiers to identify if a node dominates a specific span in a sentence.
We show that the interplay between the classifiers improves the accuracy of both and, as a result, enables effective parsing.
arXiv Detail & Related papers (2021-10-05T18:45:06Z)
- Transductive Learning for Unsupervised Text Style Transfer [60.65782243927698]
Unsupervised style transfer models are mainly based on an inductive learning approach.
We propose a novel transductive learning approach based on a retrieval-based context-aware style representation.
arXiv Detail & Related papers (2021-09-16T08:57:20Z)
- CoMatch: Semi-supervised Learning with Contrastive Graph Regularization [86.84486065798735]
CoMatch is a new semi-supervised learning method that unifies dominant approaches.
It achieves state-of-the-art performance on multiple datasets.
arXiv Detail & Related papers (2020-11-23T02:54:57Z)
- The Tactician (extended version): A Seamless, Interactive Tactic Learner and Prover for Coq [0.6445605125467572]
We present Tactician, a tactic learner and prover for the Coq Proof Assistant.
Tactician helps users make tactical proof decisions while they retain control over the general proof strategy.
We give an overview of Tactician from the user's point of view, regarding both day-to-day usage and issues of package dependency management.
arXiv Detail & Related papers (2020-07-31T23:47:29Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Tactic Learning and Proving for the Coq Proof Assistant [0.5735035463793007]
Our system predicts appropriate tactics and finds proofs in the form of tactic scripts.
The performance of the system is evaluated on the Coq Standard Library.
When combined with the CoqHammer system, the two systems together prove 56.7% of the library's lemmas.
arXiv Detail & Related papers (2020-03-20T08:22:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.