Definitional Quantifiers Realise Semantic Reasoning for Proof by
Induction
- URL: http://arxiv.org/abs/2010.10296v2
- Date: Fri, 20 May 2022 19:54:42 GMT
- Title: Definitional Quantifiers Realise Semantic Reasoning for Proof by
Induction
- Authors: Yutaka Nagashima
- Abstract summary: SeLFiE is a query language to represent users' knowledge on how to apply the induct tactic in Isabelle/HOL.
For evaluation we build an automatic induction prover using SeLFiE.
Our new prover achieves a 1.4 x 10^3% improvement over the corresponding baseline prover with a 1.0-second timeout, and the median speedup is 4.48x.
- Score: 6.85316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Proof assistants offer tactics to apply proof by induction, but these tactics
rely on inputs given by human engineers. To automate this laborious process, we
developed SeLFiE, a boolean query language to represent experienced users'
knowledge on how to apply the induct tactic in Isabelle/HOL: when we apply an
induction heuristic written in SeLFiE to an inductive problem and arguments to
the induct tactic, the SeLFiE interpreter judges whether the arguments are
plausible for that problem according to the heuristic by examining both the
syntactic structure of the problem and definitions of the relevant constants.
To examine the intricate interaction between syntactic analysis and analysis of
constant definitions, we introduce definitional quantifiers. For evaluation we
build an automatic induction prover using SeLFiE. Our evaluation based on 347
inductive problems shows that our new prover achieves a 1.4 x 10^3% improvement
over the corresponding baseline prover with a 1.0-second timeout, and the
median speedup is 4.48x.
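The abstract above describes automating the choice of arguments to Isabelle/HOL's induct tactic. The following is a minimal sketch of the manual step that SeLFiE-based tooling aims to automate; the lemma is a standard textbook example, not taken from the paper, and the choice of `xs` as the induction variable is exactly the kind of argument a heuristic would have to recommend.

```isabelle
(* Illustrative only: the engineer must pick the induction variable.
   Here "induct xs" is the human-supplied argument; a SeLFiE heuristic
   would judge whether this choice is plausible for the goal. *)
lemma "rev (rev xs) = xs"
  by (induct xs) auto
```

A heuristic written in SeLFiE inspects both the syntactic structure of such a goal and the definitions of constants like `rev` before judging a candidate argument plausible.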
Related papers
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z)
- Leveraging Affirmative Interpretations from Negation Improves Natural Language Understanding [10.440501875161003]
Negation poses a challenge in many natural language understanding tasks.
We build a plug-and-play neural generator that, given a negated statement, generates an affirmative interpretation.
We show that doing so benefits models for three natural language understanding tasks.
arXiv Detail & Related papers (2022-10-26T05:22:27Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations [71.2950434944196]
We develop Maieutic Prompting, which infers a correct answer to a question even from the noisy and inconsistent generations of language models.
Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods.
arXiv Detail & Related papers (2022-05-24T06:36:42Z)
- Probing as Quantifying the Inductive Bias of Pre-trained Representations [99.93552997506438]
We present a novel framework for probing where the goal is to evaluate the inductive bias of representations for a particular task.
We apply our framework to a series of token-, arc-, and sentence-level tasks.
arXiv Detail & Related papers (2021-10-15T22:01:16Z)
- The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation [23.711953448400514]
We inspect to what extent neural language models (LMs) exhibit uncertainty over such analyses.
We find that LMs can track multiple analyses simultaneously.
As a response to disambiguating cues, the LMs often select the correct interpretation, but occasional errors point to potential areas of improvement.
arXiv Detail & Related papers (2021-09-16T10:27:05Z)
- Faster Smarter Induction in Isabelle/HOL [6.85316573653194]
sem_ind recommends what arguments to pass to the induct method.
Definitional quantifiers allow us to investigate not only the syntactic structures of inductive problems but also the definitions of relevant constants in a domain-agnostic style.
arXiv Detail & Related papers (2020-09-19T11:51:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.