Hybrid DCOP Solvers: Boosting Performance of Local Search Algorithms
- URL: http://arxiv.org/abs/2009.02240v1
- Date: Fri, 4 Sep 2020 15:17:24 GMT
- Title: Hybrid DCOP Solvers: Boosting Performance of Local Search Algorithms
- Authors: Cornelis Jan van Leeuwen and Przemysław Pawełczak
- Abstract summary: We propose a novel method for expediting both symmetric and asymmetric Distributed Constraint Optimization Problem (DCOP) solvers.
The core idea is based on initializing DCOP solvers with greedy fast non-iterative DCOP solvers.
We show that changing the starting conditions of existing DCOP solvers not only reduces the algorithm convergence time by up to 50%, but also reduces the communication overhead and leads to a better solution quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for expediting both symmetric and asymmetric
Distributed Constraint Optimization Problem (DCOP) solvers. The core idea is
based on initializing DCOP solvers with greedy fast non-iterative DCOP solvers.
This is contrary to existing methods where initialization is always achieved
using a random value assignment. We empirically show that changing the starting
conditions of existing DCOP solvers not only reduces the algorithm convergence
time by up to 50%, but also reduces the communication overhead and leads to a
better solution quality. We show that this effect is due to structural
improvements in the variable assignment, which is caused by the spreading
pattern of DCOP algorithm activation.
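The hybrid idea can be sketched in a few lines: a greedy, non-iterative pass assigns each variable the value that minimizes cost against its already-assigned neighbors, and that assignment seeds a DSA-style local search instead of a random start. The sketch below is illustrative only, not the authors' implementation; the toy constraint graph, the |x - y| cost function, and the 0.7 activation probability are all assumptions.

```python
# Hedged sketch (not the paper's code): greedy non-iterative initialization
# for a min-sum DCOP, followed by DSA-style local search rounds.
import random

# Toy binary DCOP: variables a..d with domain {0, 1, 2}; each constraint
# edge costs |x - y|, and lower total cost is better (assumed instance).
DOMAIN = [0, 1, 2]
EDGES = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]

def cost(u_val, v_val):
    return abs(u_val - v_val)

def neighbors(var):
    return [v for u, v in EDGES if u == var] + [u for u, v in EDGES if v == var]

def local_cost(var, val, assignment):
    """Cost of var's constraints against neighbors assigned so far."""
    return sum(cost(val, assignment[n]) for n in neighbors(var) if n in assignment)

def greedy_init(variables):
    """Non-iterative greedy pass: each variable picks the value that
    minimizes cost against its already-assigned neighbors."""
    assignment = {}
    for var in variables:
        assignment[var] = min(DOMAIN, key=lambda val: local_cost(var, val, assignment))
    return assignment

def dsa_step(assignment, p=0.7):
    """One synchronous DSA-B round: each variable moves to its best
    response with activation probability p (p is an assumed setting)."""
    new_assignment = dict(assignment)
    for var in assignment:
        best = min(DOMAIN, key=lambda val: local_cost(var, val, assignment))
        improves = local_cost(var, best, assignment) < local_cost(var, assignment[var], assignment)
        if improves and random.random() < p:
            new_assignment[var] = best
    return new_assignment

random.seed(0)
variables = ["a", "b", "c", "d"]
start = greedy_init(variables)   # hybrid: greedy start instead of a random one
for _ in range(10):
    start = dsa_step(start)
total = sum(cost(start[u], start[v]) for u, v in EDGES)
print(start, total)  # on this toy instance the greedy start is already cost 0
```

On this toy instance the greedy pass alone reaches the optimum, so the local search has nothing left to repair; the paper's claim is that on harder instances such a start cuts convergence time and messaging rather than eliminating the search.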
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Fully First-Order Methods for Decentralized Bilevel Optimization [17.20330936572045]
This paper focuses on decentralized stochastic bilevel optimization (DSBO), where agents communicate only with their neighbors.
We propose Decentralized Gradient Descent and Ascent with Gradient Tracking (DSGDA-GT), a novel algorithm that only requires first-order oracles that are much cheaper than second-order oracles widely adopted in existing works.
arXiv Detail & Related papers (2024-10-25T06:11:43Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- DR-DSGD: A Distributionally Robust Decentralized Learning Algorithm over Graphs [54.08445874064361]
We propose to solve a regularized distributionally robust learning problem in the decentralized setting.
By adding a Kullback-Leibler regularization function to the robust min-max optimization problem, the learning problem can be reduced to a modified robust problem.
We show that our proposed algorithm can improve the worst-case distribution test accuracy by up to 10%.
arXiv Detail & Related papers (2022-08-29T18:01:42Z)
- Pretrained Cost Model for Distributed Constraint Optimization Problems [37.79733538931925]
Distributed Constraint Optimization Problems (DCOPs) are an important subclass of optimization problems.
We propose a novel directed acyclic graph schema representation for DCOPs and leverage the Graph Attention Networks (GATs) to embed graph representations.
Our model, GAT-PCM, is then pretrained with optimally labelled data in an offline manner, so as to boost a broad range of DCOP algorithms.
arXiv Detail & Related papers (2021-12-08T09:24:10Z)
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- A Particle Swarm Inspired Approach for Continuous Distributed Constraint Optimization Problems [7.512486812178571]
Continuous DCOPs can explicitly model problems with continuous variables.
State-of-the-art approaches for solving C-DCOPs experience either onerous memory or computation overhead.
We propose a new C-DCOP algorithm, namely Particle Swarm Optimization Based C-DCOP (PCD), which is inspired by Particle Swarm Optimization (PSO).
arXiv Detail & Related papers (2020-10-20T11:04:47Z)
- On Population-Based Algorithms for Distributed Constraint Optimization Problems [12.21350091202884]
We study an emerging class of incomplete algorithms that are broadly termed as population-based algorithms.
Our first approach, Anytime Evolutionary DCOP or AED, exploits evolutionary optimization meta-heuristics to solve DCOPs.
In our second contribution, we show that population-based approaches can be combined with local search approaches.
arXiv Detail & Related papers (2020-09-02T06:39:30Z)
- AP-Loss for Accurate One-Stage Object Detection [49.13608882885456]
One-stage object detectors are trained by optimizing classification-loss and localization-loss simultaneously.
The former suffers much from extreme foreground-background imbalance due to the large number of anchors.
This paper proposes a novel framework to replace the classification task in one-stage detectors with a ranking task.
arXiv Detail & Related papers (2020-08-17T13:22:01Z)
- Learning Optimal Temperature Region for Solving Mixed Integer Functional DCOPs [26.16778095954815]
We combine Distributed Constraint Optimization Problems (DCOPs) with Functional DCOPs (F-DCOPs).
We then propose a novel algorithm, Distributed Parallel Simulated Annealing (DPSA).
We show that DPSA produces solutions of significantly better quality than the state-of-the-art non-exact algorithms in their corresponding settings.
arXiv Detail & Related papers (2020-02-27T09:46:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.