Empirical Evaluation of the Implicit Hitting Set Approach for Weighted CSPs
- URL: http://arxiv.org/abs/2501.07432v1
- Date: Mon, 13 Jan 2025 15:59:28 GMT
- Title: Empirical Evaluation of the Implicit Hitting Set Approach for Weighted CSPs
- Authors: Aleksandra Petrova, Javier Larrosa, Emma Rollón
- Abstract summary: We explore some alternatives to the existing algorithm of reference for the weighted CSP problem.
Our empirical study shows that for WCSP it is not easy to identify the best alternative.
- Score: 46.010796136659536
- Abstract: SAT technology has proven to be surprisingly effective in a large variety of domains. However, for the Weighted CSP problem, dedicated algorithms have always been superior. One approach not well studied so far is the use of SAT in conjunction with the Implicit Hitting Set (IHS) approach. In this work, we explore some alternatives to the existing algorithm of reference. The alternatives, mostly borrowed from related Boolean frameworks, consider trade-offs for the two main components of the IHS approach: the computation of low-cost hitting vectors, and their transformation into high-cost cores. For each component, we propose four levels of intensity. Since we also test the usefulness of cost-function merging, our experiments consider 32 different implementations. Our empirical study shows that for WCSP it is not easy to identify the best alternative. Nevertheless, the cost-function-merging encoding combined with extracting maximal cores seems to be a robust approach.
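To make the two IHS components concrete, here is a minimal sketch of the generic implicit hitting set loop; `min_cost_hitting_vector` and `extract_core` are hypothetical placeholders for the optimizer and oracle components the abstract describes, not the paper's implementation.

```python
# Minimal sketch of the generic implicit hitting set (IHS) loop for WCSP.
# `min_cost_hitting_vector` and `extract_core` are hypothetical placeholders
# standing in for the two components discussed in the abstract: an optimizer
# that finds a low-cost vector hitting all known cores, and an oracle that
# turns an unrealizable vector into a new (preferably high-cost) core.

def ihs_solve(wcsp, min_cost_hitting_vector, extract_core):
    cores = []  # cores (cost-increasing subsets) discovered so far
    while True:
        # Component 1: cheapest cost vector consistent with all known cores.
        vector, lower_bound = min_cost_hitting_vector(cores)
        # Component 2: check whether some solution realizes this cost vector;
        # if not, the oracle returns a new core to add.
        core = extract_core(wcsp, vector)
        if core is None:           # vector is realizable: lower bound is tight
            return lower_bound
        cores.append(core)         # refine the hitting set problem and repeat
```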
Related papers
- Anytime Cooperative Implicit Hitting Set Solving [46.010796136659536]
The Implicit Hitting Set (HS) approach has been shown to be very effective for MaxSAT, pseudo-Boolean optimization, and other frameworks.
We show how it can be easily combined in a multithread architecture where cores discovered by either component are made available to the other.
We show that the resulting algorithm (HS-lub) is consistently superior to both HS-lb and HS-ub in isolation.
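As a rough illustration of the cooperative architecture (not the paper's code), the sketch below runs two HS-style workers that share every discovered core through a common thread-safe pool; `CorePool` and the `solve_step` callbacks are hypothetical.

```python
# Illustrative sketch (not the paper's code) of two implicit-hitting-set
# workers sharing discovered cores through a thread-safe pool.
import threading

class CorePool:
    """Thread-safe store of cores discovered by any worker."""
    def __init__(self):
        self._cores, self._lock = [], threading.Lock()

    def add(self, core):
        with self._lock:
            self._cores.append(core)

    def snapshot(self):
        with self._lock:
            return list(self._cores)

def hs_worker(solve_step, pool, steps=100):
    # Each worker runs its own HS loop but seeds every iteration with
    # all cores found so far, including those from the other worker.
    for _ in range(steps):
        new_core = solve_step(pool.snapshot())  # hypothetical HS iteration
        if new_core is None:
            break
        pool.add(new_core)

# Usage: run a lower-bound and an upper-bound HS variant concurrently.
# pool = CorePool()
# threading.Thread(target=hs_worker, args=(hs_lb_step, pool)).start()
# threading.Thread(target=hs_worker, args=(hs_ub_step, pool)).start()
```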
arXiv Detail & Related papers (2025-01-14T07:23:52Z) - Optimal Multi-Fidelity Best-Arm Identification [65.23078799972188]
In bandit best-arm identification, an algorithm is tasked with finding the arm with the highest mean reward, with a specified accuracy, as fast as possible.
We study multi-fidelity best-arm identification, in which the algorithm can choose to sample an arm at a lower fidelity (obtaining a less accurate mean estimate) for a lower cost.
Several methods have been proposed for tackling this problem, but their optimality remains elusive, notably due to loose lower bounds on the total cost needed to identify the best arm.
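A minimal sketch of the setting may help; the arm means, fidelity noise levels, and costs below are invented, and the snippet illustrates the cost/accuracy trade-off rather than any particular identification algorithm.

```python
# Minimal sketch of the multi-fidelity bandit setting: sampling an arm at a
# lower fidelity costs less but yields a noisier (less accurate) estimate.
# The arm means, fidelity noise levels, and costs below are made up.
import random

ARM_MEANS = [0.3, 0.5, 0.7]               # unknown to the algorithm
FIDELITIES = {                            # fidelity -> (noise_std, cost)
    "low":  (0.5, 1.0),
    "high": (0.1, 10.0),
}

def sample(arm, fidelity):
    noise_std, cost = FIDELITIES[fidelity]
    reward = random.gauss(ARM_MEANS[arm], noise_std)
    return reward, cost

# Each pull charges `cost`; the goal is to identify argmax(ARM_MEANS) with
# the required confidence while minimizing the total cost spent.
reward, cost = sample(arm=1, fidelity="low")
```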
arXiv Detail & Related papers (2024-06-05T08:02:40Z) - Stochastic Average Gradient : A Simple Empirical Investigation [0.0]
Stochastic Average Gradient (SAG) is a method for optimizing the sum of a finite number of smooth functions.
SAG converges faster than other iterative methods on simple toy problems and performs better than many other iterative methods on simple machine learning problems.
We also propose a combination of SAG with the momentum algorithm and Adam.
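For reference, a minimal sketch of the basic SAG update: store the last gradient seen for each summand and step along the running average of the stored gradients.

```python
# Minimal sketch of the SAG update for minimizing (1/n) * sum_i f_i(x):
# keep the last gradient seen for each f_i and step along their average.
import numpy as np

def sag(grad_i, x0, n, lr=0.01, iters=1000, seed=0):
    """grad_i(i, x) returns the gradient of the i-th summand at x."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    memory = np.zeros((n, x.size))   # last stored gradient of each f_i
    avg = np.zeros(x.size)           # running average of stored gradients
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(i, x)
        avg += (g - memory[i]) / n   # swap f_i's old gradient for the new one
        memory[i] = g
        x = x - lr * avg
    return x
```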
arXiv Detail & Related papers (2023-07-27T17:34:26Z) - Efficient computation of the Knowledge Gradient for Bayesian Optimization [1.0497128347190048]
One-shot Hybrid KG is a new approach that combines several of the previously proposed ideas and is cheap to compute as well as powerful and efficient.
All experiments are implemented in BOTorch and empirically show drastically reduced computational overhead with equal or improved performance.
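As background, the knowledge gradient values a candidate point by the expected improvement of the posterior maximum after observing it; below is a plain Monte Carlo estimator against a hypothetical GP interface (`mean`, `std`, `fantasize`), not the paper's one-shot hybrid method.

```python
# Schematic Monte Carlo estimate of the knowledge gradient acquisition:
# KG(x) = E_y[ max_x' mu_new(x') ] - max_x' mu_old(x').
# `gp` is a hypothetical Gaussian-process posterior with `mean`, `std`,
# and `fantasize` (condition on a simulated observation) methods.
import numpy as np

def knowledge_gradient(gp, x, grid, n_samples=64, seed=0):
    rng = np.random.default_rng(seed)
    best_now = gp.mean(grid).max()             # current best posterior mean
    gains = []
    for _ in range(n_samples):
        y = rng.normal(gp.mean(x), gp.std(x))  # fantasy observation at x
        gp_new = gp.fantasize(x, y)            # posterior updated with (x, y)
        gains.append(gp_new.mean(grid).max() - best_now)
    return float(np.mean(gains))               # expected gain from sampling x
```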
arXiv Detail & Related papers (2022-09-30T10:39:38Z) - Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of ranked-range losses, which likewise lacks efficient solvers.
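For context, here is a minimal sketch of the evaluation measure itself, the normalized partial AUC over an FPR range, computed from the ROC curve; this is not the paper's training algorithm.

```python
# Minimal sketch: the (normalized) partial AUC over an FPR range
# [fpr_lo, fpr_hi], computed directly from the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_lo=0.05, fpr_hi=0.5):
    fpr, tpr, _ = roc_curve(y_true, scores)
    # Interpolate TPR on the requested FPR range and integrate (trapezoid rule).
    grid = np.linspace(fpr_lo, fpr_hi, 200)
    tpr_on_grid = np.interp(grid, fpr, tpr)
    area = float(np.sum((tpr_on_grid[1:] + tpr_on_grid[:-1]) / 2 * np.diff(grid)))
    return area / (fpr_hi - fpr_lo)   # normalize to [0, 1]
```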
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation of the effects of common and less common design choices on the performance of (Gaussian-process-based) BO.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z) - Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point in the optimal $O(\epsilon^{-3})$ number of stochastic gradient and Hessian-vector product computations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
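A heavily simplified sketch of the general idea: maintain a momentum-style gradient estimate and correct it with a stochastic Hessian-vector product along the last step so that it tracks the moving gradient. This illustrates the principle only and does not reproduce the paper's exact estimator.

```python
# Simplified sketch: momentum corrected with second-order information.
# Not the paper's exact estimator; an illustration of the principle.
import numpy as np

def corrected_momentum_sgd(grad, hvp, x0, lr=0.01, beta=0.9, iters=1000):
    """grad(x): stochastic gradient; hvp(x, v): stochastic Hessian-vector product."""
    x_prev = x0.astype(float)
    m = grad(x_prev)                 # initial gradient estimate
    x = x_prev - lr * m              # first step
    for _ in range(iters):
        # Transport the old estimate to the new iterate using curvature
        # (first-order Taylor: grad(x) ~ grad(x_prev) + H @ (x - x_prev)),
        # then blend in a fresh stochastic gradient.
        m = beta * (m + hvp(x, x - x_prev)) + (1 - beta) * grad(x)
        x_prev, x = x, x - lr * m
    return x
```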
arXiv Detail & Related papers (2021-03-04T19:01:20Z) - Constraint-Handling Techniques for Particle Swarm Optimization Algorithms [0.0]
Population-based methods can cope with a variety of different problems, including problems of remarkably higher complexity than those traditional methods can handle.
The aim here is to develop and compare different constraint-handling techniques (CHTs) suitable for particle swarm optimization (PSO) algorithms, incorporated into an algorithm with general-purpose settings.
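As one concrete example of a CHT, the sketch below incorporates a simple additive penalty into a bare-bones PSO; the objective, constraint, and all parameter settings are invented, and the paper compares several such techniques rather than this one in particular.

```python
# Minimal sketch of one common constraint-handling technique for PSO:
# an additive penalty that turns constraint violations into extra cost.
import numpy as np

def penalized(f, constraints, x, rho=1e3):
    # Each constraint g satisfies g(x) <= 0 when feasible; violations cost extra.
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + rho * violation

def pso(f, constraints, dim=2, n=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # personal bests
    score = np.array([penalized(f, constraints, p) for p in x])
    gbest = x[score.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        new = np.array([penalized(f, constraints, p) for p in x])
        improved = new < score
        pbest[improved], score[improved] = x[improved], new[improved]
        gbest = pbest[score.argmin()].copy()
    return gbest

# Usage: minimize a sphere subject to x[0] + x[1] >= 1 (written as g(x) <= 0).
best = pso(lambda p: np.sum(p**2), [lambda p: 1 - p[0] - p[1]])
```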
arXiv Detail & Related papers (2021-01-25T01:49:10Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient stochastic distributed optimization, in which machines can independently compute stochastic gradients and cooperate to jointly optimize over their local objective functions.
Our algorithm is based on a new concentration filtering technique, and its sample and time complexities match the best known theoretical bounds.
It is also very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
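To illustrate the flavor of Byzantine-resilient aggregation, the sketch below uses a coordinate-wise median as a simple robust aggregator; the paper's algorithm relies on a concentration-based filtering technique instead, which this stand-in does not reproduce.

```python
# Illustrative sketch of Byzantine-resilient gradient aggregation using the
# coordinate-wise median, a simple stand-in for robust aggregation.
import numpy as np

def robust_sgd_step(x, worker_grads, lr=0.1):
    """worker_grads: list of gradient vectors, some possibly adversarial."""
    stacked = np.stack(worker_grads)          # shape: (n_machines, dim)
    agg = np.median(stacked, axis=0)          # resistant to a minority of outliers
    return x - lr * agg

# Usage: 4 honest workers plus 1 Byzantine worker sending garbage.
x = np.zeros(3)
grads = [np.ones(3) * 0.5] * 4 + [np.full(3, 1e9)]
x = robust_sgd_step(x, grads)                 # the median ignores the outlier
```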
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
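As a sketch of why the per-step cost can be independent of the training-set size, here is a mini-batch gradient step for one DRO objective (the CVaR of the losses); the helpers `losses` and `loss_grad` are invented for illustration, and this is not the paper's exact algorithm.

```python
# Minimal sketch of a mini-batch gradient step for a CVaR-style DRO objective:
# average the gradients of the worst alpha-fraction of examples in the batch.
# The mini-batch estimator keeps the per-step cost independent of len(X).
import numpy as np

def cvar_dro_step(w, X, y, losses, loss_grad, lr=0.1, alpha=0.1, batch=256, rng=None):
    """losses(w, X, y): per-example losses; loss_grad(w, X, y): average gradient."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(X), size=batch, replace=False)   # mini-batch
    l = losses(w, X[idx], y[idx])
    k = max(1, int(alpha * batch))
    worst = idx[np.argsort(l)[-k:]]                       # worst alpha-fraction
    return w - lr * loss_grad(w, X[worst], y[worst])      # step on their gradient
```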
arXiv Detail & Related papers (2020-10-12T17:41:44Z)