Quantifier Instantiations: To Mimic or To Revolt?
- URL: http://arxiv.org/abs/2508.13811v1
- Date: Tue, 19 Aug 2025 13:16:25 GMT
- Title: Quantifier Instantiations: To Mimic or To Revolt?
- Authors: Jan Jakubův, Mikoláš Janota
- Abstract summary: Quantified formulas pose a significant challenge for Satisfiability Modulo Theories (SMT) solvers. Existing instantiation techniques, such as e-matching, syntax-guided, model-based, conflict-based, and enumerative methods, often complement each other. This paper introduces a novel instantiation approach that dynamically learns from these techniques during solving.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantified formulas pose a significant challenge for Satisfiability Modulo Theories (SMT) solvers due to their inherent undecidability. Existing instantiation techniques, such as e-matching, syntax-guided, model-based, conflict-based, and enumerative methods, often complement each other. This paper introduces a novel instantiation approach that dynamically learns from these techniques during solving. By treating observed instantiations as samples from a latent language, we use probabilistic context-free grammars to generate new, similar terms. Our method not only mimics successful past instantiations but also explores diversity by optionally inverting learned term probabilities, aiming to balance exploitation and exploration in quantifier reasoning.
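The abstract's core idea (treating observed instantiation terms as samples from a latent language, fitting a probabilistic grammar to them, and optionally inverting the learned probabilities to explore) can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: terms are nested tuples over a toy signature, and the symbols `f`, `g`, `x`, `a` and the `ARITY` table are invented for the example.

```python
import random
from collections import Counter

# Toy signature: function symbols with their arities. In a real SMT solver
# these would come from the problem's term language.
ARITY = {"f": 2, "g": 1, "x": 0, "a": 0}

def collect_counts(term, counts):
    """Count head-symbol occurrences in a term (a sample of the latent language)."""
    head, *args = term
    counts[head] += 1
    for a in args:
        collect_counts(a, counts)

def learn_pcfg(terms):
    """Estimate symbol probabilities from observed instantiation terms."""
    counts = Counter()
    for t in terms:
        collect_counts(t, counts)
    total = sum(counts.values())
    return {sym: c / total for sym, c in counts.items()}

def invert(probs):
    """Invert learned probabilities to favour rarely seen symbols (exploration)."""
    inv = {sym: 1.0 / p for sym, p in probs.items()}
    z = sum(inv.values())
    return {sym: w / z for sym, w in inv.items()}

def generate(probs, depth, rng):
    """Sample a new term of bounded depth; at depth 0 only constants remain."""
    pool = {s: p for s, p in probs.items() if depth > 0 or ARITY[s] == 0}
    syms, weights = zip(*pool.items())
    head = rng.choices(syms, weights=weights)[0]
    return (head,) + tuple(generate(probs, depth - 1, rng)
                           for _ in range(ARITY[head]))

# ("f", ("x",), ("g", ("x",))) stands for the term f(x, g(x)).
observed = [("f", ("x",), ("g", ("x",))),
            ("g", ("x",)),
            ("f", ("a",), ("x",))]
probs = learn_pcfg(observed)
rng = random.Random(0)
mimic = generate(probs, 3, rng)            # exploitation: imitate past samples
explore = generate(invert(probs), 3, rng)  # exploration: favour rare symbols
```

Sampling from `probs` mimics past successful instantiations, while sampling from `invert(probs)` biases generation toward symbols the solver has rarely tried, which is the exploitation/exploration trade-off the abstract describes.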
Related papers
- Solving Inverse Problems in Stochastic Self-Organising Systems through Invariant Representations [12.394699094197545]
Self-organising systems demonstrate how simple local rules can generate complex patterns. Many natural systems rely on such dynamics, making self-organisation central to understanding natural complexity. A fundamental challenge in modelling such systems is solving the inverse problem: finding the unknown causal parameters from macroscopic observations.
arXiv Detail & Related papers (2025-06-13T14:01:39Z)
- Unifying and extending Diffusion Models through PDEs for solving Inverse Problems [3.1225172236361165]
Diffusion models have emerged as powerful generative tools with applications in computer vision and scientific machine learning (SciML). Traditionally, these models have been derived using principles of variational inference, denoising, statistical signal processing, and differential equations. In this study we derive diffusion models using ideas from linear partial differential equations and demonstrate that this approach has several benefits.
arXiv Detail & Related papers (2025-04-10T04:07:36Z)
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage. Models may behave unreliably due to poorly explored failure modes. Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z)
- Toward Understanding In-context vs. In-weight Learning [50.24035812301655]
We identify simplified distributional properties that give rise to the emergence and disappearance of in-context learning. We then extend the study to a full large language model, showing how fine-tuning on various collections of natural language prompts can elicit similar in-context and in-weight learning behaviour.
arXiv Detail & Related papers (2024-10-30T14:09:00Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- A Hybrid System for Systematic Generalization in Simple Arithmetic Problems [70.91780996370326]
We propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols.
We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases.
arXiv Detail & Related papers (2023-06-29T18:35:41Z)
- Learning to Reason With Relational Abstractions [65.89553417442049]
We study how to build stronger reasoning capability in language models using the idea of relational abstractions.
We find that models that are supplied with such sequences as prompts can solve tasks with a significantly higher accuracy.
arXiv Detail & Related papers (2022-10-06T00:27:50Z)
- Compositional Semantic Parsing with Large Language Models [27.627684573915147]
We identify challenges in more realistic semantic parsing tasks with larger vocabulary.
Our best method is based on least-to-most prompting.
We expect similar efforts will lead to new results in other tasks and domains.
arXiv Detail & Related papers (2022-09-29T17:58:28Z)
- A Contrastive Framework for Neural Text Generation [46.845997620234265]
We show that an underlying reason for model degeneration is the anisotropic distribution of token representations.
We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method -- contrastive search -- to encourage diversity while maintaining coherence in the generated text.
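The contrastive search rule summarized above can be sketched as a scoring function that balances model confidence against similarity to previously generated hidden states. This is a toy illustration: the candidate probabilities and hidden-state vectors are invented placeholders rather than real model outputs, and `alpha` is the degeneration-penalty weight.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as tuples of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_step(candidates, prev_states, alpha=0.6):
    """Pick the next token from candidates = [(token, prob, hidden_vector)].

    Score = (1 - alpha) * model confidence
          - alpha * max similarity to any previous hidden state,
    so high-probability tokens are penalized if they would make the
    representation too similar to what was already generated.
    """
    def score(c):
        _, p, h = c
        penalty = max(cosine(h, g) for g in prev_states) if prev_states else 0.0
        return (1 - alpha) * p - alpha * penalty
    return max(candidates, key=score)[0]

# Toy example: "the" is more probable but its hidden state repeats the
# context, so contrastive search prefers the more diverse candidate "a".
prev = [(1.0, 0.0)]
cands = [("the", 0.6, (1.0, 0.0)), ("a", 0.3, (0.0, 1.0))]
picked = contrastive_step(cands, prev)
```

With an empty `prev_states` the penalty vanishes and the rule degrades to greedy selection, which is why the penalty term is what encourages diversity while the probability term maintains coherence.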
arXiv Detail & Related papers (2022-02-13T21:46:14Z)
- Multi-sense embeddings through a word sense disambiguation process [2.2344764434954256]
Most Suitable Sense Annotation (MSSA) disambiguates and annotates each word by its specific sense, considering the semantic effects of its context.
We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results.
arXiv Detail & Related papers (2021-01-21T16:22:34Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.