How Low Can We Go? Minimizing Interaction Samples for Configurable Systems
- URL: http://arxiv.org/abs/2501.06788v1
- Date: Sun, 12 Jan 2025 12:02:26 GMT
- Title: How Low Can We Go? Minimizing Interaction Samples for Configurable Systems
- Authors: Dominik Krupke, Ahmad Moradi, Michael Perk, Phillip Keldenich, Gabriel Gehrke, Sebastian Krieter, Thomas Thüm, Sándor P. Fekete
- Abstract summary: We present a framework for combining near-optimal solutions with provable lower bounds on the required sample size. Our algorithm SampLNS can reliably find samples of smaller size than previous methods in 85% of the cases. This makes it possible to avoid cumbersome efforts of minimizing samples by researchers as well as practitioners.
- Score: 2.569213261297365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern software systems are typically configurable, a fundamental prerequisite for wide applicability and reusability. This flexibility poses an extraordinary challenge for quality assurance, as the enormous number of possible configurations makes it impractical to test each of them separately. This is where t-wise interaction sampling can be used to systematically cover the configuration space and detect unknown feature interactions. Over the last two decades, numerous algorithms for computing small interaction samples have been studied, providing improvements for a range of heuristic results; nevertheless, it has remained unclear how much these results can still be improved. We present a significant breakthrough: a fundamental framework, based on the mathematical principle of duality, for combining near-optimal solutions with provable lower bounds on the required sample size. This implies that we no longer need to work on heuristics with marginal or no improvement, but can certify the solution quality by establishing a limit on the remaining gap; in many cases, we can even prove optimality of achieved solutions. This theoretical contribution also provides extensive practical improvements: Our algorithm SampLNS was tested on 47 small and medium-sized configurable systems from the existing literature. SampLNS can reliably find samples of smaller size than previous methods in 85% of the cases; moreover, we can achieve and prove optimality of solutions for 63% of all instances. This makes it possible to avoid cumbersome efforts of minimizing samples by researchers as well as practitioners, and substantially save testing resources for most configurable systems.
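To make the objective concrete, here is a minimal sketch (not the SampLNS algorithm itself) of what 2-wise interaction coverage means, using three hypothetical unconstrained Boolean features. The counting argument in the comments mirrors, in miniature, the paper's idea of certifying optimality by matching a lower bound.

```python
from itertools import combinations, product

def uncovered_pairs(num_features, sample):
    """Return every pairwise interaction ((i, a), (j, b)) that no
    configuration in `sample` covers; empty set means full 2-wise coverage."""
    missing = set()
    for i, j in combinations(range(num_features), 2):
        for a, b in product((False, True), repeat=2):
            if not any(cfg[i] == a and cfg[j] == b for cfg in sample):
                missing.add(((i, a), (j, b)))
    return missing

# Three unconstrained Boolean features: each of the C(3,2) = 3 feature pairs
# needs all 4 value combinations, and a single configuration contributes only
# one combination per pair -- so at least 4 configurations are required.
# This sample of size 4 meets that bound, so it is provably optimal.
sample = [
    (False, False, False),
    (False, True,  True),
    (True,  False, True),
    (True,  True,  False),
]
assert not uncovered_pairs(3, sample)
```

Real feature models add constraints that rule out many configurations, which is what makes both the small samples and the matching lower bounds hard to obtain in general.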
Related papers
- Towards Comprehensive Sampling of SMT Solutions [17.952635077005144]
PanSampler is a novel SMT sampler that achieves high coverage with a small number of solutions. It incorporates three novel techniques: a diversity-aware SMT algorithm, an abstract syntax tree (AST)-guided scoring function, and a post-sampling optimization technique. PanSampler exhibits a significantly stronger capability to reach high target coverage, while requiring fewer solutions than current samplers to achieve the same coverage level.
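PanSampler's own techniques are not detailed in the summary; the sketch below shows only the generic baseline such samplers improve on: enumerating distinct SMT models by blocking each one found, via z3's Python API (the constraints are a made-up example).

```python
# Generic baseline (NOT PanSampler's algorithm): enumerate distinct SMT
# models by adding a blocking clause after each solution.
# Requires `pip install z3-solver`.
from z3 import Solver, Int, Or, sat

x, y = Int("x"), Int("y")
solver = Solver()
solver.add(x + y == 10, x >= 0, y >= 0)  # hypothetical constraints

solutions = []
while len(solutions) < 5 and solver.check() == sat:
    model = solver.model()
    solutions.append((model[x], model[y]))
    # Forbid this exact assignment so the next check() finds a new model.
    solver.add(Or(x != model[x], y != model[y]))

print(solutions)  # five distinct (x, y) pairs summing to 10
```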
arXiv Detail & Related papers (2025-11-13T14:00:00Z)
- Efficient Heuristics and Exact Methods for Pairwise Interaction Sampling [0.8242194776558897]
We consider a class of optimization problems that are fundamental to testing in modern software systems. We are able to solve the largest instances in published benchmark sets to provable optimality.
arXiv Detail & Related papers (2025-10-07T14:11:28Z)
- Neural Solver Selection for Combinatorial Optimization [23.449310200885893]
We propose the first general framework to coordinate neural solvers, which involves feature extraction, a selection model, and a selection strategy.
We show that the proposed framework can effectively distribute instances and the resulting composite solver can achieve significantly better performance.
arXiv Detail & Related papers (2024-10-13T02:05:41Z)
- Faster Stochastic Variance Reduction Methods for Compositional MiniMax Optimization [50.10952609321302]
Compositional minimax optimization is a pivotal challenge across various machine learning domains.
Current methods of compositional minimax optimization are plagued by sub-optimal complexities or heavy reliance on sizable batch sizes.
This paper introduces a novel method, called Nested STOchastic Recursive Momentum (NSTORM), which can achieve the optimal sample complexity of $O(\kappa^3/\epsilon^3)$.
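For orientation, one common formulation of the compositional minimax problem (an assumption; the paper's exact setting may differ) is

```latex
\min_{x}\;\max_{y}\; f\bigl(g(x),\,y\bigr),
\qquad g(x) \;=\; \mathbb{E}_{\xi}\bigl[\,g(x;\xi)\,\bigr]
```

Because the inner function $g$ is itself an expectation, naive stochastic gradients of the composition are biased, which is one source of the sub-optimal complexities mentioned above; $\kappa$ is presumably the condition number and $\epsilon$ the target accuracy.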
arXiv Detail & Related papers (2023-08-18T14:57:21Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while balancing exploration and exploitation automatically.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- Best-Effort Adaptation [62.00856290846247]
We present a new theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights.
We show how these bounds can guide the design of learning algorithms that we discuss in detail.
We report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms.
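As a point of reference for what "sample reweighting" means here, a minimal sketch with hypothetical data (the paper's bounds concern how such weights should be chosen, which is not shown):

```python
import numpy as np

def reweighted_risk(losses, weights):
    """Weighted empirical risk sum_i q_i * loss_i, with the weights
    normalized onto the probability simplex."""
    q = np.asarray(weights, dtype=float)
    q = q / q.sum()
    return float(q @ np.asarray(losses, dtype=float))

# Uniform weights recover the ordinary empirical risk; skewed weights
# emphasize samples believed closer to the target domain.
losses = [0.2, 0.9, 0.4]
print(reweighted_risk(losses, [1, 1, 1]))  # 0.5
print(reweighted_risk(losses, [4, 1, 1]))  # 0.35
```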
arXiv Detail & Related papers (2023-05-10T00:09:07Z)
- Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates [0.6083861980670925]
This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems. The proposed approach is based on constructing a piecewise affine surrogate of the objective function over feasible samples. The two algorithms are evaluated on several unconstrained and constrained mixed-variable benchmark problems.
arXiv Detail & Related papers (2023-02-09T15:04:35Z)
- Single-Trajectory Distributionally Robust Reinforcement Learning [21.955807398493334]
We propose Distributionally Robust RL (DRRL) to enhance performance across a range of environments.
Existing DRRL algorithms are either model-based or fail to learn from a single sample trajectory.
We design the first fully model-free DRRL algorithm, distributionally robust Q-learning with a single trajectory (DRQ).
arXiv Detail & Related papers (2023-01-27T14:08:09Z)
- Learning Sampling Distributions for Model Predictive Control [36.82905770866734]
Sampling-based approaches have become a cornerstone of contemporary Model Predictive Control (MPC).
We propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution.
Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time.
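For context, below is the vanilla sampling-based MPC loop that such learned distributions replace: a random-shooting sketch on a hypothetical 1-D double integrator (the paper's latent-space machinery is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, u, dt=0.1):
    """Hypothetical 1-D double integrator: state = (position, velocity)."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + u * dt])

def cost(traj):
    return sum(pos**2 + 0.1 * vel**2 for pos, vel in traj)

def random_shooting_mpc(state, horizon=20, num_samples=256):
    """Sample action sequences, roll them out, return the best first action."""
    best_u0, best_cost = 0.0, float("inf")
    for _ in range(num_samples):
        us = rng.normal(0.0, 1.0, size=horizon)  # naive sampling distribution
        s, traj = state, []
        for u in us:
            s = step(s, u)
            traj.append(s)
        c = cost(traj)
        if c < best_cost:
            best_u0, best_cost = us[0], c
    return best_u0

print(random_shooting_mpc(np.array([1.0, 0.0])))
```

Learning the sampling distribution, as the paper proposes, aims to concentrate these rollouts on promising action sequences instead of the naive Gaussian used here.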
arXiv Detail & Related papers (2022-12-05T20:35:36Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Bloom Origami Assays: Practical Group Testing [90.2899558237778]
Group testing is a well-studied problem with several appealing solutions.
Recent biological studies impose practical constraints for COVID-19 that are incompatible with traditional methods.
We develop a new method combining Bloom filters with belief propagation to scale to larger values of n (more than 100) with good empirical results.
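For readers unfamiliar with the building block, a minimal Bloom filter sketch follows (a standard data structure; the paper's contribution is combining it with belief propagation, which is not shown here):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over a bit array of size m.
    Membership tests may yield false positives but never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("sample-42")
assert "sample-42" in bf   # always true once added
print("sample-7" in bf)    # usually False; rare false positives possible
```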
arXiv Detail & Related papers (2020-07-21T19:31:41Z)
- Fast and stable MAP-Elites in noisy domains using deep grids [1.827510863075184]
Deep-Grid MAP-Elites is a variant of the MAP-Elites algorithm that uses an archive of similar previously encountered solutions to approximate the performance of a solution.
We show that this simple approach is significantly more resilient to noise on the behavioural descriptors, while achieving competitive performances in terms of fitness optimisation.
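A minimal sketch of the underlying MAP-Elites archive (the standard algorithm on a hypothetical task; the deep-grid variant keeps a sub-archive of solutions per cell rather than the single elite shown here):

```python
import random

random.seed(0)

def evaluate(x):
    """Hypothetical task: fitness to maximize plus a 1-D behaviour descriptor."""
    fitness = -sum(v * v for v in x)
    descriptor = sum(x) / len(x)  # roughly in [-1, 1]
    return fitness, descriptor

def cell(descriptor, num_cells=10):
    """Map a descriptor to a grid cell, clamped to the archive bounds."""
    return min(num_cells - 1, max(0, int((descriptor + 1) / 2 * num_cells)))

archive = {}  # cell index -> (fitness, solution): one elite per cell
for _ in range(5000):
    if archive:  # mutate a randomly chosen elite
        _, parent = random.choice(list(archive.values()))
        x = [v + random.gauss(0, 0.1) for v in parent]
    else:        # bootstrap with a random solution
        x = [random.uniform(-1, 1) for _ in range(4)]
    fitness, desc = evaluate(x)
    c = cell(desc)
    if c not in archive or fitness > archive[c][0]:
        archive[c] = (fitness, x)  # replace the cell's elite if better

print(len(archive), "cells filled")
```

In noisy domains, the hard one-elite-per-cell replacement above is fragile, which is what the per-cell archive of previously encountered solutions in Deep-Grid MAP-Elites is designed to mitigate.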
arXiv Detail & Related papers (2020-06-25T08:47:23Z)
- The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime [52.38455827779212]
We propose a novel technique for analyzing adaptive sampling called the Simulator.
We prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors.
Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem.
arXiv Detail & Related papers (2017-02-16T23:42:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.