A 2-opt Algorithm for Locally Optimal Set Partition Optimization
- URL: http://arxiv.org/abs/2303.08219v1
- Date: Tue, 14 Mar 2023 20:25:56 GMT
- Title: A 2-opt Algorithm for Locally Optimal Set Partition Optimization
- Authors: Kaan Gokcesu, Hakan Gokcesu
- Abstract summary: We develop an algorithm that can generate a locally optimal solution in at most $O(N^2)$ time and $O(N)$ space.
Our algorithm can handle arbitrary input precisions and does not require positive or integer inputs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our research deals with the optimization version of the set partition
problem, where the objective is to minimize the absolute difference between the
sums of the two disjoint partitions. Although this problem is known to be
NP-hard and is believed to require exponential time to solve exactly, we propose a less
demanding version of this problem, where the goal is to find a locally optimal solution.
In our approach, we consider local optimality with respect to any movement of
at most two elements. To accomplish this, we develop an algorithm that can
generate a locally optimal solution in at most $O(N^2)$ time and $O(N)$ space.
Our algorithm can handle arbitrary input precisions and does not require
positive or integer inputs. Hence, it can be applied in various problem
scenarios with ease.
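To make the 2-opt neighborhood concrete, here is a minimal Python sketch under stated assumptions: it starts from a random partition and accepts any move of one element or exchange of two elements that shrinks $|sum(A) - sum(B)|$, stopping at a 2-opt local optimum. The function name and the naive nested scan are illustrative only; the paper's algorithm reaches such a local optimum within the stated $O(N^2)$ time and $O(N)$ space bounds through a more careful search.

```python
import random

def two_opt_partition(values, seed=0):
    # Illustrative sketch of 2-opt local search for two-way set partition;
    # not the paper's O(N^2) algorithm. Handles negative and fractional
    # inputs, since every move is evaluated through the signed difference.
    rng = random.Random(seed)
    in_a = [rng.random() < 0.5 for _ in values]               # membership flags for side A
    diff = sum(v if a else -v for v, a in zip(values, in_a))  # sum(A) - sum(B)
    n = len(values)
    improved = True
    while improved:
        improved = False
        # 1-moves: shifting element i to the other side flips its sign in diff.
        for i in range(n):
            delta = -2 * values[i] if in_a[i] else 2 * values[i]
            if abs(diff + delta) < abs(diff):
                in_a[i] = not in_a[i]
                diff += delta
                improved = True
        # 2-swaps: exchanging i in A with j in B changes diff by 2*(v_j - v_i).
        for i in range(n):
            for j in range(n):
                if in_a[i] and not in_a[j]:
                    delta = 2 * (values[j] - values[i])
                    if abs(diff + delta) < abs(diff):
                        in_a[i], in_a[j] = False, True
                        diff += delta
                        improved = True
    part_a = [v for v, a in zip(values, in_a) if a]
    part_b = [v for v, a in zip(values, in_a) if not a]
    return part_a, part_b, abs(diff)
```

For example, `two_opt_partition([3.2, -1.5, 7.0, 2.25, 4.4])` returns the two parts and the final absolute gap; every accepted move strictly decreases $|sum(A) - sum(B)|$, so the loop terminates at a solution no single move or pair exchange can improve.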
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization [37.300102993926046]
We study the complexity of producing stationary points of Lipschitz objectives that are possibly neither smooth nor convex, using only zero-order evaluations.
Our analysis is based on a simple yet powerful property of the Goldstein-subdifferential set, which allows us to leverage recent advancements in nonsmooth nonconvex optimization.
arXiv Detail & Related papers (2023-07-10T11:56:04Z)
- A Linearithmic Time Locally Optimal Algorithm for the Multiway Number Partition Optimization [0.0]
We study the problem of multiway number partition optimization.
We propose a linearithmic time complexity $O(N \log N)$ algorithm that can produce such a locally optimal solution.
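For context only, the sketch below shows the classic largest-first greedy heuristic for multiway partitioning, which runs in $O(N \log N)$ via a heap; this is a well-known baseline, not the locally optimal algorithm of the cited paper, and the function name is hypothetical.

```python
import heapq

def greedy_multiway_partition(values, k):
    # Classic largest-first greedy baseline (not the cited algorithm):
    # sort the numbers in decreasing order, then always place the next
    # number on the part with the smallest current sum, tracked by a heap.
    parts = [[] for _ in range(k)]
    heap = [(0.0, i) for i in range(k)]  # (current part sum, part index)
    heapq.heapify(heap)
    for v in sorted(values, reverse=True):
        total, i = heapq.heappop(heap)
        parts[i].append(v)
        heapq.heappush(heap, (total + v, i))
    return parts
```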
arXiv Detail & Related papers (2022-03-10T20:07:21Z)
- A Quadratic Time Locally Optimal Algorithm for NP-hard Equal Cardinality Partition Optimization [0.0]
We study the optimization version of the equal cardinality set partition problem (where the absolute difference between the equal-sized partitions' sums is minimized).
Our approach does not require positive or integer inputs and works equally well under arbitrary input precisions.
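As a hedged illustration of the swap neighborhood in the equal cardinality setting, the sketch below accepts only pair exchanges, which preserve the size constraint by construction; it is not the cited quadratic-time algorithm, and the initial split in half is an arbitrary starting point.

```python
def equal_cardinality_swaps(values):
    # Pair-swap local search sketch for equal cardinality two-way partition;
    # illustrative only, not the cited paper's algorithm. Swaps never change
    # the part sizes, so the equal cardinality constraint always holds.
    assert len(values) % 2 == 0, "equal cardinality needs an even count"
    half = len(values) // 2
    a, b = list(values[:half]), list(values[half:])
    diff = sum(a) - sum(b)
    improved = True
    while improved:
        improved = False
        for i in range(half):
            for j in range(half):
                delta = 2 * (b[j] - a[i])  # effect of swapping a[i] with b[j]
                if abs(diff + delta) < abs(diff):
                    a[i], b[j] = b[j], a[i]
                    diff += delta
                    improved = True
    return a, b, abs(diff)
```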
arXiv Detail & Related papers (2021-09-16T11:25:18Z)
- Efficient Locally Optimal Number Set Partitioning for Scheduling, Allocation and Fair Selection [0.0]
We show that our proposed algorithms can find a locally optimal solution in near-linear time.
Our algorithms require neither positive nor integer elements in the input set; hence, they are more widely applicable.
arXiv Detail & Related papers (2021-09-10T11:53:34Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Instance Specific Approximations for Submodular Maximization [45.91235224228292]
A major question is how to benchmark the performance of an algorithm against an optimal solution on the instances we encounter in practice.
Our main contribution is not a new algorithm for submodular maximization but an analytical method that measures how close a given algorithm is to optimal on a specific instance.
arXiv Detail & Related papers (2021-02-23T19:39:32Z)
- A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
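The two-timescale structure can be shown in a few lines: the lower-level variable is updated with a larger step size so that it tracks its optimizer, while the upper-level variable moves on a slower schedule. The sketch below is a generic toy version of that idea under illustrative step-size choices, not the TTSA updates from the cited paper.

```python
def two_timescale(outer_grad, inner_grad, x0, y0, steps=5000):
    # Generic two-timescale loop (illustrative, not the paper's TTSA):
    # the inner step size beta decays more slowly than the outer alpha,
    # so y tracks the lower-level solution while x drifts slowly.
    x, y = x0, y0
    for t in range(1, steps + 1):
        beta = t ** (-2.0 / 3.0)   # faster (inner) timescale
        alpha = 1.0 / t            # slower (outer) timescale
        y -= beta * inner_grad(x, y)
        x -= alpha * outer_grad(x, y)
    return x, y

# Toy bilevel instance: the inner problem 0.5*(y - x)^2 has y*(x) = x, so with
# outer objective 0.5*(y - 1)^2 the hypergradient is (y - 1) and x tends to 1.
inner_grad = lambda x, y: y - x
outer_grad = lambda x, y: y - 1.0
x_star, y_star = two_timescale(outer_grad, inner_grad, x0=0.0, y0=0.0)
```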
arXiv Detail & Related papers (2020-07-10T05:20:02Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Second-order Conditional Gradient Sliding [79.66739383117232]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
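To illustrate what access through a linear optimization oracle (LMO) means, here is a minimal vanilla Frank-Wolfe (conditional gradient) sketch; SOCGS builds on this oracle model, but the code below is the standard first-order method on a toy simplex problem, not SOCGS itself.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=200):
    # Vanilla Frank-Wolfe (not SOCGS): each iteration touches the feasible
    # region only through the LMO, which minimizes a linear function over it.
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        v = lmo(grad(x))            # argmin over the feasible set of <grad, s>
        gamma = 2.0 / (t + 2.0)     # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * v
    return x

# Toy example: minimize ||x - c||^2 over the probability simplex, whose LMO
# returns the vertex (standard basis vector) at the smallest gradient entry.
c = np.array([0.2, 0.5, 0.9])
grad = lambda x: 2.0 * (x - c)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x_star = frank_wolfe(grad, lmo, x0=np.ones(3) / 3)
```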
arXiv Detail & Related papers (2020-02-20T17:52:18Z)