Faster Smarter Induction in Isabelle/HOL
- URL: http://arxiv.org/abs/2009.09215v4
- Date: Sun, 9 May 2021 07:58:26 GMT
- Title: Faster Smarter Induction in Isabelle/HOL
- Authors: Yutaka Nagashima
- Abstract summary: sem_ind recommends what arguments to pass to the induct method.
Definitional quantifiers allow us to investigate not only the syntactic structures of inductive problems but also the definitions of relevant constants in a domain-agnostic style.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proof by induction plays a critical role in formal verification and
mathematics at large. However, its automation remains one of the
long-standing challenges in Computer Science. To address this problem, we
developed sem_ind. Given an inductive problem, sem_ind recommends what arguments
to pass to the induct method. To improve the accuracy of sem_ind, we introduced
definitional quantifiers, a new kind of quantifier that allows us to
investigate not only the syntactic structures of inductive problems but also
the definitions of relevant constants in a domain-agnostic style. Our
evaluation shows that, compared to its predecessor, sem_ind improves the accuracy
of recommendation from 20.1% to 38.2% for the most promising candidates within
a 5.0-second timeout, while decreasing the median execution time from
2.79 seconds to 1.06 seconds.
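The task sem_ind automates, choosing which variable (and rule) to do induction on for Isabelle's induct method, can be illustrated with a minimal example. The sketch below is in Lean 4 rather than Isabelle/HOL, purely for illustration: proving 0 + n = n needs induction on n (not a hypothetical other variable), because addition recurses on its second argument, so the choice of induction argument decides whether the proof goes through.

```lean
-- Illustrative only: the analogue of picking the right `induct` argument.
-- `Nat.add` recurses on its second argument, so we induct on `n`.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0 by computation
  | succ k ih => rw [Nat.add_succ, ih]  -- step: unfold one `succ`, apply IH
```

Inducting on the "wrong" occurrence here leaves a goal that does not reduce, which is exactly the kind of mistake an argument-recommendation tool aims to prevent.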
Related papers
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set (30k) of natural language feedback examples can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z) - Fast Ergodic Search with Kernel Functions [0.4915744683251149]
We develop a kernel-based ergodic metric and generalize it from Euclidean space to Lie groups.
We derive the first-order optimality condition of the kernel ergodic metric for nonlinear systems.
Comprehensive numerical benchmarks show that the proposed method is at least two orders of magnitude faster than the state-of-the-art algorithm.
arXiv Detail & Related papers (2024-03-03T15:30:31Z) - Deep Declarative Dynamic Time Warping for End-to-End Learning of Alignment Paths [54.53208538517505]
This paper addresses learning end-to-end models for time series data that include a temporal alignment step via dynamic time warping (DTW).
We propose a DTW layer based around bi-level optimisation and deep declarative networks, which we name DecDTW.
We show that this property is particularly useful for applications where downstream loss functions are defined on the optimal alignment path itself.
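For background, classic DTW itself is a short dynamic program. The sketch below (discrete series, absolute-difference cost; function name is ours) shows the alignment recurrence, though the DecDTW layer proper is built around a bi-level optimisation formulation rather than this plain DP:

```python
import numpy as np

def dtw_cost(x, y):
    """Classic dynamic-time-warping alignment cost between two 1-D series.

    A background sketch only: DecDTW differentiates through an optimisation-
    based formulation of alignment, not this discrete recurrence.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # each cell extends the cheapest of the three admissible moves:
            # insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

For example, `dtw_cost([1, 2, 3], [1, 2, 2, 3])` is 0.0, since warping lets the repeated 2 align at no cost.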
arXiv Detail & Related papers (2023-03-19T21:58:37Z) - Accelerating Attention through Gradient-Based Learned Runtime Pruning [9.109136535767478]
Self-attention is a key enabler of state-of-the-art accuracy for transformer-based Natural Language Processing models.
This paper formulates the search for pruning thresholds through a soft differentiable regularizer integrated into the training loss function.
We devise a bit-serial architecture, dubbed LeOPArd, for transformer language models with bit-level early termination microarchitectural mechanism.
arXiv Detail & Related papers (2022-04-07T05:31:13Z) - On the Second-order Convergence Properties of Random Search Methods [7.788439526606651]
We prove that standard random search methods that do not rely on second-order information converge to a second-order stationary point.
We propose a novel variant that exploits negative curvature while relying only on function evaluations.
arXiv Detail & Related papers (2021-10-25T20:59:31Z) - Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation [62.51758040848735]
We introduce a simple and efficient method, Attention with Linear Biases (ALiBi), that allows for extrapolation.
ALiBi does not add positional embeddings to the word embeddings; instead, it biases the query-key attention scores with a term that is proportional to their distance.
We show that this method allows training a 1.3 billion parameter model on input sequences of length 1024 that extrapolates to input sequences of length 2048, achieving the same perplexity as a sinusoidal position embedding model trained on inputs of length 2048.
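The ALiBi bias is simple enough to write down directly. A sketch (assuming the paper's slope schedule for power-of-two head counts, a geometric sequence starting at 2^(-8/num_heads); the function name is ours):

```python
import numpy as np

def alibi_bias(num_heads, seq_len):
    """Per-head linear distance penalties added to query-key attention scores
    before softmax; no positional embeddings are added to the word embeddings.

    Sketch under the assumption of a power-of-two head count, where slopes
    form the geometric sequence 2^(-8i/n) for heads i = 1..n.
    """
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    pos = np.arange(seq_len)
    dist = pos[None, :] - pos[:, None]          # key index minus query index
    # causal mask: keys behind the query get -slope * distance; future keys
    # are blocked with -inf so softmax assigns them zero weight
    penalty = np.where(dist <= 0, dist, -np.inf)
    return slopes[:, None, None] * penalty      # (num_heads, seq_len, seq_len)
```

Because the penalty depends only on relative distance, the same bias formula extends to sequence lengths never seen in training, which is what enables the extrapolation the summary describes.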
arXiv Detail & Related papers (2021-08-27T17:35:06Z) - A2P-MANN: Adaptive Attention Inference Hops Pruned Memory-Augmented Neural Networks [3.682712058535653]
We propose an online adaptive approach called A2P-MANN to limit the number of required attention inference hops in memory-augmented neural networks.
The technique eliminates a large number of unnecessary computations when extracting the correct answer.
The efficacy of the technique is assessed using the twenty question-answering (QA) tasks of the bAbI dataset.
arXiv Detail & Related papers (2021-01-24T12:02:12Z) - Definitional Quantifiers Realise Semantic Reasoning for Proof by Induction [6.85316573653194]
SeLFiE is a query language to represent users' knowledge on how to apply the induct tactic in Isabelle/HOL.
For evaluation, we build an automatic induction prover using SeLFiE.
Our new prover achieves a 1.4 x 10^3 % improvement over the corresponding baseline prover for a 1.0-second timeout, and the median speedup is 4.48x.
arXiv Detail & Related papers (2020-10-19T09:05:09Z) - PRover: Proof Generation for Interpretable Reasoning over Rules [81.40404921232192]
We propose a transformer-based model that answers binary questions over rule-bases and generates the corresponding proofs.
Our model learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.
We conduct experiments on synthetic, hand-authored, and human-paraphrased rule-bases to show promising results for QA and proof generation.
arXiv Detail & Related papers (2020-10-06T15:47:53Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than 8,000 variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.