SMT(LIA) Sampling with High Diversity
- URL: http://arxiv.org/abs/2503.04782v2
- Date: Mon, 15 Sep 2025 07:08:14 GMT
- Title: SMT(LIA) Sampling with High Diversity
- Authors: Yong Lai, Junjie Li, Chuan Luo
- Abstract summary: We have developed the first sampling framework that integrates local search with CDCL(T) techniques. HighDiv is capable of generating a highly diverse set of solutions for constraints under linear integer theory.
- Score: 12.198700284977962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Satisfiability Modulo Linear Integer Arithmetic, SMT(LIA) for short, is pivotal across various critical domains. Previous research has primarily focused on SMT solving techniques. However, in practical applications such as software and hardware testing, there is a need to generate a diverse set of solutions for use as test inputs. We have developed the first sampling framework that integrates local search with CDCL(T) techniques, named HighDiv, capable of generating a highly diverse set of solutions for constraints under linear integer theory. Initially, in the local search phase, we introduced a novel operator called boundary-aware movement. This operator performs random moves by considering the current state's constraints on variables, thereby enhancing the diversity of variables during the search process. Furthermore, we have conducted an in-depth study of the preprocessing and variable initialization mechanisms within the framework, which significantly enhances the efficiency of subsequent local searches. Lastly, we use the solutions obtained from local search sampling as additional constraints to further explore the solution space using the stochastic CDCL(T) method. Experimental results demonstrate that HighDiv generates solutions with greater diversity compared to the state-of-the-art SMT(LIA) sampling tool, MeGASampler.
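The boundary-aware movement described above can be illustrated in a few lines. The sketch below is a minimal reconstruction from the abstract's description, not HighDiv's actual implementation; the constraint encoding, the fallback search box, and the stay-put behavior on infeasible moves are all illustrative assumptions:

```python
import random

def boundary_aware_move(assignment, var, constraints, box=10**6):
    """One boundary-aware move (a sketch of the idea, not HighDiv's exact
    operator): instead of a fixed +/-1 step, compute the interval of values
    for `var` that keeps every linear constraint satisfied with the other
    variables held fixed, then sample uniformly from that interval.

    `constraints` is a list of (coeffs, bound) pairs encoding
    sum(coeffs[v] * x[v]) <= bound over integer variables."""
    lo, hi = -box, box                       # fallback search box
    for coeffs, bound in constraints:
        a = coeffs.get(var, 0)
        if a == 0:
            continue
        # residual slack once every other variable is held fixed
        rest = sum(c * assignment[v] for v, c in coeffs.items() if v != var)
        limit = bound - rest                 # a * x[var] <= limit
        if a > 0:
            hi = min(hi, limit // a)         # x <= floor(limit / a)
        else:
            lo = max(lo, -((-limit) // a))   # x >= ceil(limit / a)
    if lo > hi:                              # no feasible value: stay put
        return assignment[var]
    return random.randint(lo, hi)            # uniform over the feasible range
```

For example, with constraints x + y <= 10 and -x <= 0 and the current state x = 3, y = 4, a move on x resamples it uniformly from [0, 6], which spreads the search over the whole feasible range rather than a +/-1 neighborhood.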
Related papers
- Variable Search Stepsize for Randomized Local Search in Multi-Objective Combinatorial Optimization [12.70422671595511]
We present a simple yet effective local search method, called variable stepsize randomized local search (VS-RLS). VS-RLS transitions gradually from a broad, exploratory search in the early phases to a more focused, fine-grained search as the search progresses. We demonstrate the effectiveness and generalizability of VS-RLS through extensive evaluations against local search and multi-objective evolutionary algorithms.
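As a rough illustration of the broad-to-focused transition behind VS-RLS, here is a minimal single-objective sketch on bit strings; the linear stepsize schedule, the flip neighborhood, and the acceptance rule are illustrative assumptions, not the authors' exact definitions:

```python
import random

def vs_rls(objective, x, iters, rng=random.Random(0)):
    """Variable-stepsize randomized local search on a bit string (a sketch).
    The stepsize shrinks linearly from len(x)//2 down to 1, moving from broad
    exploration early on to fine-grained single-bit search at the end."""
    best = list(x)
    best_val = objective(best)
    n = len(best)
    for t in range(iters):
        # linearly decaying stepsize: broad early, focused late
        step = max(1, (n // 2) * (iters - t) // iters)
        cand = list(best)
        for i in rng.sample(range(n), step):
            cand[i] = 1 - cand[i]            # flip `step` distinct random bits
        val = objective(cand)
        if val >= best_val:                  # accept improving or equal moves
            best, best_val = cand, val
    return best, best_val
```

On OneMax (maximize the number of ones), the large early steps make quick coarse progress while the final single-bit steps finish the climb.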
arXiv Detail & Related papers (2026-02-05T13:59:05Z) - MultiGA: Leveraging Multi-Source Seeding in Genetic Algorithms [8.975943388046058]
Large Language Models (LLMs) are widely used across research domains to tackle complex tasks, but their performance can vary significantly depending on the task at hand. We introduce a novel approach, MultiGA, which applies genetic algorithm principles to address complex natural language tasks and reasoning problems. We benchmark our approach on text-to-code generation tasks, trip planning, the GPQA benchmark for graduate-level science questions, and the BBQ bias benchmark.
arXiv Detail & Related papers (2025-11-21T21:47:33Z) - Towards Comprehensive Sampling of SMT Solutions [17.952635077005144]
PanSampler is a novel SMT sampler that achieves high coverage with a small number of solutions. It incorporates three novel techniques: a diversity-aware SMT algorithm, an abstract syntax tree (AST)-guided scoring function, and a post-sampling optimization technique. PanSampler exhibits a significantly stronger capability to reach high target coverage, while requiring fewer solutions than current samplers to achieve the same coverage level.
arXiv Detail & Related papers (2025-11-13T14:00:00Z) - Quantization Meets dLLMs: A Systematic Study of Post-training Quantization for Diffusion LLMs [78.09559830840595]
We present the first systematic study on quantizing diffusion-based language models. We identify the presence of activation outliers, characterized by abnormally large activation values. We implement state-of-the-art PTQ methods and conduct a comprehensive evaluation.
arXiv Detail & Related papers (2025-08-20T17:59:51Z) - Go With the Flow: Fast Diffusion for Gaussian Mixture Models [16.07896640031724]
Schrödinger Bridges (SBs) are diffusion processes that steer, in finite time, a given initial distribution to another final one while minimizing a suitable cost functional. We propose an analytic parametrization of a set of feasible policies for solving low-dimensional problems. We showcase the potential of this approach in problems such as image-to-image translation in the latent space of an autoencoder, learning of cellular dynamics using multi-marginal momentum SB problems, and various other examples.
arXiv Detail & Related papers (2024-12-12T08:40:22Z) - Factorization of Multi-Agent Sampling-Based Motion Planning [72.42734061131569]
Modern robotics often involves multiple embodied agents operating within a shared environment.
Standard sampling-based algorithms can be used to search for solutions in the robots' joint space.
We integrate the concept of factorization into sampling-based algorithms, which requires only minimal modifications to existing methods.
We present a general implementation of a factorized SBA, derive an analytical gain in terms of sample complexity for PRM*, and showcase empirical results for RRG.
arXiv Detail & Related papers (2023-04-01T15:50:18Z) - Numerical Methods for Convex Multistage Stochastic Optimization [86.45244607927732]
We focus on Stochastic Programming (SP), Stochastic Optimal Control (SOC), and Markov Decision Processes (MDP).
Recent progress in solving convex multistage stochastic problems is based on cutting-plane approximations of the cost-to-go functions of dynamic programming equations.
Cutting plane type methods can handle multistage problems with a large number of stages, but a relatively smaller number of state (decision) variables.
arXiv Detail & Related papers (2023-03-28T01:30:40Z) - Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates [0.6083861980670925]
This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems. The proposed approach is based on constructing a piecewise affine surrogate of the objective function over feasible samples. The two algorithms are evaluated on several unconstrained and constrained mixed-variable benchmark problems.
arXiv Detail & Related papers (2023-02-09T15:04:35Z) - Near-optimal Policy Identification in Active Reinforcement Learning [84.27592560211909]
AE-LSVI is a novel variant of the kernelized least-squares value iteration (LSVI) algorithm that combines optimism with pessimism for active exploration.
We show that AE-LSVI outperforms other algorithms in a variety of environments when robustness to the initial state is required.
arXiv Detail & Related papers (2022-12-19T14:46:57Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of weakly-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
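Such convex-submodular schemes alternate a continuous update (e.g., a gradient step on the convex variable) with a discrete one. The discrete half is typically a greedy submodular maximization; a generic sketch of that building block (not the paper's specific algorithm) is:

```python
def greedy_submodular_max(f, ground, k):
    """Standard greedy for maximizing a monotone submodular set function
    under a cardinality constraint (the classic 1 - 1/e guarantee). Shown
    here as the discrete building block that iterative convex-submodular
    methods interleave with gradient steps on the continuous variable."""
    S = set()
    for _ in range(k):
        best, gain = None, float('-inf')
        for e in sorted(ground - S):         # deterministic tie-breaking
            g = f(S | {e}) - f(S)            # marginal gain of adding e
            if g > gain:
                best, gain = e, g
        S.add(best)
    return S
```

For instance, with a coverage objective f(S) = |union of the sets indexed by S|, greedy picks the element with the largest marginal coverage at each step.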
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Output Space Entropy Search Framework for Multi-Objective Bayesian Optimization [32.856318660282255]
We study black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments).
We propose a general framework for solving MOO problems based on the principle of output space entropy (OSE) search.
Our OSE search based algorithms improve over state-of-the-art methods in terms of both computational efficiency and accuracy of MOO solutions.
arXiv Detail & Related papers (2021-10-13T18:43:39Z) - Adaptive Local Bayesian Optimization Over Multiple Discrete Variables [9.860437640748113]
This paper describes, in a step-wise manner, the approach of team KAIST OSI, which outperforms the baseline algorithms by up to +20.39%.
We combine Bayesian optimization with a multi-armed bandit (MAB) approach to select values with consideration of the variable types.
Empirical evaluations demonstrate that our method outperforms the existing methods across different tasks.
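The bandit side of such a Bayesian/MAB combination can be sketched with a standard UCB1 rule over the candidate values of one discrete variable; this is a generic bandit sketch under assumed bookkeeping (per-value pull counts and reward sums), not team KAIST OSI's exact method:

```python
import math

def ucb_pick(values, counts, sums, t, c=1.4):
    """UCB1 selection over the candidate values of one discrete variable:
    exploit high empirical mean rewards while granting an exploration bonus
    to rarely tried values. `counts` and `sums` map each candidate value to
    its pull count and cumulative reward; `t` is the total round count."""
    def score(v):
        if counts[v] == 0:
            return float('inf')              # try every value at least once
        mean = sums[v] / counts[v]
        return mean + c * math.sqrt(math.log(t) / counts[v])
    return max(values, key=score)
```

With equal counts the rule simply picks the value with the best empirical mean; untried values are always picked first.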
arXiv Detail & Related papers (2020-12-07T07:51:23Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Best Principal Submatrix Selection for the Maximum Entropy Sampling Problem: Scalable Algorithms and Performance Guarantees [1.7640556247739623]
MESP has been widely applied to many areas, including healthcare, power systems, manufacturing, and data science.
We derive a novel convex integer program for MESP and show that its continuous relaxation yields a near-optimal solution.
Our numerical experiments demonstrate that these approximation algorithms can efficiently solve medium-sized and large-scale instances to near-optimality.
arXiv Detail & Related papers (2020-01-23T14:14:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.