Boosting Data Reduction for the Maximum Weight Independent Set Problem
Using Increasing Transformations
- URL: http://arxiv.org/abs/2008.05180v2
- Date: Thu, 13 Aug 2020 05:45:23 GMT
- Title: Boosting Data Reduction for the Maximum Weight Independent Set Problem
Using Increasing Transformations
- Authors: Alexander Gellner, Sebastian Lamm, Christian Schulz, Darren Strash,
Bogdán Zaválnij
- Abstract summary: We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
- Score: 59.84561168501493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a vertex-weighted graph, the maximum weight independent set problem
asks for a pair-wise non-adjacent set of vertices such that the sum of their
weights is maximum. The branch-and-reduce paradigm is the de facto standard
approach to solve the problem to optimality in practice. In this paradigm, data
reduction rules are applied to decrease the problem size. These data reduction
rules ensure that given an optimum solution on the new (smaller) input, one can
quickly construct an optimum solution on the original input.
We introduce new generalized data reduction and transformation rules for the
problem. A key feature of our work is that some transformation rules can
increase the size of the input. Surprisingly, these so-called increasing
transformations can simplify the problem and also open up the reduction space
to yield even smaller irreducible graphs later in the algorithm. In
experiments, our algorithm computes significantly smaller irreducible graphs on
all except one instance, solves more instances to optimality than previously
possible, is up to two orders of magnitude faster than the best
state-of-the-art solver, and finds higher-quality solutions than heuristic
solvers DynWVC and HILS on many instances. While the increasing transformations
are only efficient enough for preprocessing at this time, we see this as a
critical initial step towards a new branch-and-transform paradigm.
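To make the data reduction idea concrete, the following is a minimal sketch of one classical weighted reduction rule, neighborhood removal: if a vertex v weighs at least as much as its entire neighborhood, some maximum weight independent set contains v, so v and N(v) can be removed and w(v) recorded as an offset. This illustrates only the basic reduction concept, not the new increasing transformations; the dictionary-based graph representation and the function name are invented for the example.

    def neighborhood_removal(adj, weight):
        """adj: dict vertex -> set of neighbours; weight: dict vertex -> weight.
        Repeatedly applies the rule; returns (reduced adj, forced vertices, offset)."""
        forced, offset = [], 0
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if weight[v] >= sum(weight[u] for u in adj[v]):
                    # v outweighs its whole neighbourhood, so some optimum
                    # solution takes v: record it, then delete v and N(v).
                    forced.append(v)
                    offset += weight[v]
                    for r in adj[v] | {v}:
                        for u in adj.get(r, ()):
                            if u in adj:
                                adj[u].discard(r)
                        adj.pop(r, None)
                    changed = True
                    break  # the graph changed; rescan the remaining vertices
        return adj, forced, offset

    # Tiny example: a weighted path a - b - c where b outweighs both neighbours.
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    weight = {"a": 1, "b": 5, "c": 2}
    print(neighborhood_removal(adj, weight))  # ({}, ['b'], 5)

Any optimum solution of the reduced graph, together with the forced vertices and the recorded offset, yields an optimum solution of the original graph, which is exactly the guarantee the branch-and-reduce paradigm relies on.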
Related papers
- Fast Screening Rules for Optimal Design via Quadratic Lasso
Reformulation [0.135975510645475]
In this work, we derive safe screening rules that can be used to discard inessential samples.
The new tests are much faster to compute, especially for problems involving a parameter space of high dimension.
We show how an existing homotopy algorithm to compute the regularization path of the lasso method can be reparametrized with respect to the squared $\ell_1$-penalty.
arXiv Detail & Related papers (2023-10-13T08:10:46Z)
- Decentralized gradient descent maximization method for composite nonconvex strongly-concave minimax problems [7.5573375809946395]
We make the first attempt at solving NCSC minimax problems that can have nonsmooth terms on both blocks of variables.
Our algorithm is designed based on a novel reformulation of the minimax problem.
arXiv Detail & Related papers (2023-04-05T13:54:43Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Smooth over-parameterized solvers for non-smooth structured optimization [3.756550107432323]
Non-smoothness encodes structural constraints on the solutions, such as sparsity, group sparsity, low rank, and sharp edges.
We propose to operate a non-convex but smooth over-parametrization of the underlying nonsmooth optimization problems.
Our main contribution is to apply the Variable Projection (VarPro) method, which defines a new formulation by explicitly minimizing over part of the variables.
arXiv Detail & Related papers (2022-05-03T09:23:07Z)
- A framework for bilevel optimization that enables stochastic and global variance reduction algorithms [17.12280360174073]
Bilevel optimization is the problem of minimizing a value function that involves the arg-minimum of another function.
We introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time.
We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(\frac{1}{T})$ convergence rate, and that it achieves linear convergence under the Polyak-Łojasiewicz assumption.
arXiv Detail & Related papers (2022-01-31T18:17:25Z)
- Mean Field Approximation for solving QUBO problems [0.0]
We show that the statistical physics approach and the quantum mechanical approach to mean-field annealing give the same result.
Our methods consist of a set of simple gradient-based minimizations with continuous variables and are thus easy to simulate.
arXiv Detail & Related papers (2021-06-06T20:35:28Z)
- Aligning Partially Overlapping Point Sets: an Inner Approximation Algorithm [80.15123031136564]
We propose a robust method to align point sets where there is no prior information about the value of the transformation.
Our algorithm does not require regularization on the transformation, and can therefore handle settings with no prior information about the transformation values.
Experimental results demonstrate the better robustness of the proposed method over state-of-the-art algorithms.
arXiv Detail & Related papers (2020-07-05T15:23:33Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, but only up to an error.
Rather than fixing the minibatch and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we consider the nonconvex strongly-concave setting of gradient-free minimax optimization problems.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT); a generic sketch-and-solve illustration follows this list.
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
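The last entry describes sketching for L2-regularized least-squares. As a rough, generic illustration of the sketch-and-solve idea with a plain Gaussian embedding (an SRHT would replace the matrix S), the snippet below solves a compressed copy of the ridge problem; the function name sketched_ridge, the sketch size m, and the toy data are assumptions for the example rather than the paper's effective-dimension-adaptive method.

    import numpy as np

    def sketched_ridge(A, b, lam, m, seed=0):
        """Approximately solve  min_x ||A x - b||^2 + lam * ||x||^2
        by compressing the n-row problem with an m x n Gaussian embedding."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch, E[S^T S] = I
        SA, Sb = S @ A, S @ b                         # sketched data: (m x d) and (m,)
        # Normal equations of the sketched ridge problem.
        return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

    # Toy usage: a tall 10000 x 50 problem compressed to 500 rows.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((10000, 50))
    b = A @ rng.standard_normal(50) + 0.1 * rng.standard_normal(10000)
    x_hat = sketched_ridge(A, b, lam=1.0, m=500)

Solving the m x d sketched system is much cheaper than the original n x d problem when m is much smaller than n, at the price of an approximation error controlled by the embedding dimension.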