Anytime Cooperative Implicit Hitting Set Solving
- URL: http://arxiv.org/abs/2501.07896v1
- Date: Tue, 14 Jan 2025 07:23:52 GMT
- Title: Anytime Cooperative Implicit Hitting Set Solving
- Authors: Emma Rollón, Javier Larrosa, Aleksandra Petrova
- Abstract summary: The Implicit Hitting Set (HS) approach has been shown to be very effective for MaxSAT, Pseudo-Boolean optimization, and other frameworks.
We show how it can be easily combined in a multithread architecture where cores discovered by either component are available to the other.
We show that the resulting algorithm (HS-lub) is consistently superior to both HS-lb and HS-ub in isolation.
- Score: 46.010796136659536
- License:
- Abstract: The Implicit Hitting Set (HS) approach has been shown to be very effective for MaxSAT, Pseudo-Boolean optimization, and other Boolean frameworks. Very recently, it has also shown its potential in the very similar Weighted CSP framework by means of so-called cost-function merging. The original formulation of the HS approach focuses on obtaining increasingly better lower bounds (HS-lb). However, as shown for Pseudo-Boolean Optimization, this approach can also be adapted to compute increasingly better upper bounds (HS-ub). In this paper we consider both HS approaches and show how they can be easily combined in a multithread architecture where cores discovered by either component are available to the other, which, interestingly, generates synergy between them. We show that the resulting algorithm (HS-lub) is consistently superior to both HS-lb and HS-ub in isolation. Most importantly, HS-lub has an effective anytime behaviour with which the optimality gap is reduced during execution. We tested our approach on the Weighted CSP framework and show on three different benchmarks that our very simple implementation sometimes outperforms the parallel hybrid best-first search implementation of the far more developed state-of-the-art solver Toulbar2.
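The cooperative scheme the abstract describes can be illustrated with a toy sketch: an "implicit" problem whose cores are hidden behind an oracle, a lower-bound component that solves an exact minimum-cost hitting set over the cores discovered so far, and an upper-bound component that greedily extends a feasible hitting set, with both components feeding one shared core pool. This is a minimal illustration under assumed names (`oracle`, `min_hitting_set`, `greedy_hitting_set`, `hs_lub`, and the toy `WEIGHTS`/`HIDDEN_CORES` instance are all hypothetical); the paper's actual solver works over Weighted CSP cost functions and runs the two components in separate threads.

```python
from itertools import combinations

# Toy "implicit" problem: the full set of cores is hidden behind an oracle.
# Elements stand for soft constraints; relaxing element e costs WEIGHTS[e].
WEIGHTS = {"a": 3, "b": 1, "c": 2, "d": 2}
HIDDEN_CORES = [{"a", "b"}, {"b", "c"}, {"a", "c", "d"}]  # unknown to the solver

def oracle(hitting_set):
    """Return some hidden core not hit by hitting_set, or None if all are hit."""
    for core in HIDDEN_CORES:
        if not (core & hitting_set):
            return frozenset(core)
    return None

def min_hitting_set(cores):
    """Exact minimum-cost hitting set (brute force; fine for a toy instance)."""
    elems = sorted({e for c in cores for e in c})
    best, best_cost = frozenset(), 0
    if cores:
        best_cost = float("inf")
        for r in range(len(elems) + 1):
            for sub in combinations(elems, r):
                s = frozenset(sub)
                cost = sum(WEIGHTS[e] for e in s)
                if cost < best_cost and all(s & c for c in cores):
                    best, best_cost = s, cost
    return best, best_cost

def greedy_hitting_set(cores):
    """Cheap, non-optimal hitting set: repeatedly pick the lightest useful element."""
    hs, remaining = set(), [set(c) for c in cores]
    while remaining:
        e = min({x for c in remaining for x in c}, key=WEIGHTS.get)
        hs.add(e)
        remaining = [c for c in remaining if e not in c]
    return hs

def hs_lub():
    """Cooperative loop: both components read and write one shared core pool."""
    pool = set()                      # cores discovered by either component
    lb, ub = 0, sum(WEIGHTS.values())
    while lb < ub:
        # HS-lb step: an optimal hitting set of the known cores gives a lower bound.
        hs, lb = min_hitting_set(pool)
        core = oracle(hs)
        if core is None:              # hs already hits every hidden core: optimal
            return lb, lb
        pool.add(core)
        # HS-ub step: grow a greedy hitting set until the oracle is silent,
        # harvesting every core it reveals into the shared pool.
        ghs = greedy_hitting_set(pool)
        while (c := oracle(ghs)) is not None:
            pool.add(c)
            ghs = greedy_hitting_set(pool)
        ub = min(ub, sum(WEIGHTS[e] for e in ghs))
    return lb, ub

lb, ub = hs_lub()
print(lb, ub)  # both converge to the optimum cost of 3 on this instance
```

The "synergy" the abstract mentions shows up here as well: cores harvested by the cheap upper-bound phase tighten the next lower-bound hitting-set computation without extra oracle calls from the lb side, so the gap `ub - lb` shrinks from both directions, giving the anytime behaviour.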
Related papers
- Scalable First-order Method for Certifying Optimal k-Sparse GLMs [9.613635592922174]
We propose a first-order proximal gradient algorithm to solve the perspective relaxation of the problem within a BnB framework.
We show that our approach significantly accelerates dual bound computations and is highly effective in providing optimality certificates for large-scale problems.
arXiv Detail & Related papers (2025-02-13T17:14:18Z) - Empirical Evaluation of the Implicit Hitting Set Approach for Weighted CSPs [46.010796136659536]
We explore some alternatives to the existing algorithm of reference for the weighted CSP problem.
Our empirical study shows that for WCSP it is not easy to identify the best alternative.
arXiv Detail & Related papers (2025-01-13T15:59:28Z) - LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z) - CEBin: A Cost-Effective Framework for Large-Scale Binary Code Similarity Detection [23.8834126695488]
Binary code similarity detection (BCSD) is a fundamental technique for various applications.
We propose a cost-effective BCSD framework, CEBin, which fuses embedding-based and comparison-based approaches.
arXiv Detail & Related papers (2024-02-29T03:02:07Z) - FlexHB: a More Efficient and Flexible Framework for Hyperparameter Optimization [4.127081624438282]
We propose FlexHB, a new method pushing multi-fidelity BO to the limit and re-designing a framework for early stopping with Successive Halving (SH).
Our method achieves superior efficiency and outperforms other methods on various HPO tasks.
arXiv Detail & Related papers (2024-02-21T09:18:59Z) - ALEXR: An Optimal Single-Loop Algorithm for Convex Finite-Sum Coupled Compositional Stochastic Optimization [53.14532968909759]
We introduce an efficient single-loop primal-dual block-coordinate algorithm, dubbed ALEXR.
We establish the convergence rates of ALEXR in both convex and strongly convex cases under smoothness and non-smoothness conditions.
We present lower complexity bounds to demonstrate that the convergence rates of ALEXR are optimal among first-order block-coordinate algorithms for the considered class of cFCCO problems.
arXiv Detail & Related papers (2023-12-04T19:00:07Z) - COPS: Controlled Pruning Before Training Starts [68.8204255655161]
State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion -- called pruning score.
In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies.
arXiv Detail & Related papers (2021-07-27T08:48:01Z) - Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z) - Comparing and Combining Approximate Computing Frameworks [0.0]
VIPER and BOA show how approximation frameworks can be compared and combined to create larger, richer trade-off spaces.
We use VIPER and BOA to compare and combine three different approximation frameworks from across the system stack.
arXiv Detail & Related papers (2021-02-16T04:52:43Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning (e.g., deep AUC).
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.