A Particle Swarm Inspired Approach for Continuous Distributed Constraint
Optimization Problems
- URL: http://arxiv.org/abs/2010.10192v1
- Date: Tue, 20 Oct 2020 11:04:47 GMT
- Title: A Particle Swarm Inspired Approach for Continuous Distributed Constraint
Optimization Problems
- Authors: Moumita Choudhury, Amit Sarker, Md. Mosaddek Khan, William Yeoh
- Abstract summary: Continuous DCOPs can explicitly model problems with continuous variables.
State-of-the-art approaches for solving C-DCOPs experience either onerous memory or computation overhead.
We propose a new C-DCOP algorithm, namely Particle Swarm Optimization Based C-DCOP (PCD), which is inspired by Particle Swarm Optimization (PSO).
- Score: 7.512486812178571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed Constraint Optimization Problems (DCOPs) are a widely studied
framework for coordinating interactions in cooperative multi-agent systems. In
classical DCOPs, variables owned by agents are assumed to be discrete. However,
in many applications, such as target tracking or sleep scheduling in sensor
networks, continuous-valued variables are more suitable than discrete ones. To
better model such applications, researchers have proposed Continuous DCOPs
(C-DCOPs), an extension of DCOPs, that can explicitly model problems with
continuous variables. The state-of-the-art approaches for solving C-DCOPs
experience either onerous memory or computation overhead and are unsuitable for
non-differentiable optimization problems. To address this issue, we propose a
new C-DCOP algorithm, namely Particle Swarm Optimization Based C-DCOP (PCD),
which is inspired by Particle Swarm Optimization (PSO), a well-known
centralized population-based approach for solving continuous optimization
problems. In recent years, population-based algorithms have gained significant
attention in classical DCOPs due to their ability to produce high-quality
solutions. Nonetheless, to the best of our knowledge, this class of algorithms
has not been utilized to solve C-DCOPs and there has been no work evaluating
the potential of PSO in solving classical DCOPs or C-DCOPs. In light of this
observation, we adapted PSO, a centralized algorithm, to solve C-DCOPs in a
decentralized manner. The resulting PCD algorithm not only produces
good-quality solutions but also finds solutions without any requirement for
derivative calculations. Moreover, we design a crossover operator that can be
used by PCD to further improve the quality of solutions found. Finally, we
theoretically prove that PCD is an anytime algorithm and empirically evaluate
PCD against the state-of-the-art C-DCOP algorithms in a wide variety of
benchmarks.
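To make the PSO machinery the abstract refers to concrete, the following is a minimal, centralized sketch of a particle swarm searching a toy continuous constraint problem with two variables and a single quadratic cost function. It is only an illustration under assumed hyperparameters (inertia W, acceleration coefficients C1/C2, domain bounds); it is not the decentralized PCD algorithm, its crossover operator, or its message-passing structure.

```python
# Minimal, centralized PSO sketch on a toy continuous constraint problem with two
# variables and one quadratic cost. Illustrative only: this is NOT the decentralized
# PCD algorithm; the cost, bounds, and hyperparameters below are assumptions.
import random

def cost(x, y):
    # Toy binary constraint cost between two continuous variables.
    return (x - y) ** 2 + (x - 1.0) ** 2

LOW, HIGH = -10.0, 10.0           # shared variable domain
N_PARTICLES, N_ITERS = 20, 100
W, C1, C2 = 0.7, 1.5, 1.5         # inertia and acceleration coefficients

# Each particle carries a complete assignment (x, y) and a velocity.
pos = [[random.uniform(LOW, HIGH) for _ in range(2)] for _ in range(N_PARTICLES)]
vel = [[0.0, 0.0] for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]                      # best assignment seen per particle
gbest = min(pbest, key=lambda p: cost(*p))[:]    # best assignment seen by the swarm

for _ in range(N_ITERS):
    for i in range(N_PARTICLES):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # Classic PSO velocity update: inertia + cognitive + social pull.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = min(max(pos[i][d] + vel[i][d], LOW), HIGH)
        if cost(*pos[i]) < cost(*pbest[i]):
            pbest[i] = pos[i][:]
            if cost(*pbest[i]) < cost(*gbest):
                gbest = pbest[i][:]

print("best assignment:", gbest, "cost:", cost(*gbest))
```

In PCD the corresponding bookkeeping would have to be maintained collectively by the agents that own the variables; note also that the swarm's best-known assignment never worsens between iterations, which is the kind of monotonicity an anytime guarantee rests on.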
Related papers
- Provably Efficient UCB-type Algorithms For Learning Predictive State
Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
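As a rough, hedged illustration of the sign-based zeroth-order idea mentioned above (not the authors' exact method), the sketch below estimates a gradient from function evaluations only and then steps along the sign of each coordinate; the objective, smoothing radius mu, sample count, and step size are assumptions chosen for demonstration.

```python
# Hedged sketch of sign-based zeroth-order descent (ZO-signGD style): estimate a
# gradient from function values only, then step along the sign of each coordinate.
# The objective, smoothing radius mu, sample count, and step size are assumptions.
import math
import random

def f(x):
    # Stand-in black-box objective queried by value only (no gradients exposed).
    return sum((xi - 0.5) ** 2 for xi in x)

def zo_gradient(x, mu=1e-2, n_samples=10):
    # Gaussian-smoothing estimator: E[(f(x + mu*u) - f(x)) / mu * u] ~ grad f(x).
    dim = len(x)
    grad = [0.0] * dim
    base = f(x)
    for _ in range(n_samples):
        u = [random.gauss(0.0, 1.0) for _ in range(dim)]
        diff = (f([xi + mu * ui for xi, ui in zip(x, u)]) - base) / mu
        for d in range(dim):
            grad[d] += diff * u[d] / n_samples
    return grad

x = [random.uniform(-1.0, 1.0) for _ in range(5)]
step = 0.02
for _ in range(200):
    g = zo_gradient(x)
    # Sign step: only the estimated direction of each coordinate is used.
    x = [xi - step * math.copysign(1.0, gi) for xi, gi in zip(x, g)]
print("final objective value:", f(x))
```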
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piecewise-linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z)
- Improving Solution Quality of Bounded Max-Sum Algorithm to Solve DCOPs involving Hard and Soft Constraints [3.0969191504482243]
Bounded Max-Sum (BMS) is a message-passing algorithm that provides approximate solutions to a specific form of decentralized coordination problems.
In particular, the BMS algorithm is able to solve problems of this type with large search spaces at low computational cost.
arXiv Detail & Related papers (2020-12-02T18:10:14Z)
- Hybrid DCOP Solvers: Boosting Performance of Local Search Algorithms [0.6853165736531939]
We propose a novel method for expediting both symmetric and asymmetric Distributed Constraint Optimization Problem (DCOP) solvers.
The core idea is to initialize DCOP solvers with solutions produced by greedy, fast, non-iterative DCOP solvers.
We show that changing the starting conditions of existing DCOP solvers not only reduces the algorithm convergence time by up to 50%, but also reduces the communication overhead and leads to a better solution quality.
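As a toy, hedged illustration of this warm-starting idea (not the actual symmetric and asymmetric DCOP solvers evaluated in the paper), the sketch below seeds an iterative hill-climbing search with a greedy, one-pass assignment on a three-variable binary constraint table; the instance and both procedures are assumptions for demonstration.

```python
# Toy illustration of seeding an iterative DCOP-style local search with a fast,
# greedy, non-iterative assignment. The tiny instance and both procedures are
# assumptions for demonstration, not the hybrid solvers studied in the paper.
DOMAIN = [0, 1, 2]

# Pairwise costs between three variables a, b, c (lower is better).
COSTS = {
    ("a", "b"): lambda x, y: abs(x - y),
    ("b", "c"): lambda x, y: (x + y) % 3,
    ("a", "c"): lambda x, y: 2 if x == y else 0,
}

def assignment_cost(assign):
    # Sum only the constraints whose endpoints are both assigned.
    return sum(fn(assign[u], assign[v])
               for (u, v), fn in COSTS.items()
               if u in assign and v in assign)

def greedy_init():
    # Fast, non-iterative pass: fix variables one at a time, choosing the value
    # that is cheapest against the variables assigned so far.
    assign = {}
    for var in ("a", "b", "c"):
        assign[var] = min(DOMAIN, key=lambda v: assignment_cost({**assign, var: v}))
    return assign

def hill_climb(assign, max_sweeps=20):
    # Iterative local search: repeatedly move one variable to its best value.
    for _ in range(max_sweeps):
        improved = False
        for var in assign:
            best = min(DOMAIN, key=lambda v: assignment_cost({**assign, var: v}))
            if best != assign[var]:
                assign[var], improved = best, True
        if not improved:
            break
    return assign

seeded = hill_climb(greedy_init())   # warm-started local search
print(seeded, assignment_cost(seeded))
```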
arXiv Detail & Related papers (2020-09-04T15:17:24Z)
- On Population-Based Algorithms for Distributed Constraint Optimization Problems [12.21350091202884]
We study an emerging class of incomplete algorithms that are broadly termed as population-based algorithms.
Our first approach, Anytime Evolutionary DCOP or AED, exploits evolutionary optimization meta-heuristics to solve DCOPs.
In our second contribution, we show that population-based approaches can be combined with local search approaches.
arXiv Detail & Related papers (2020-09-02T06:39:30Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z)
- C-CoCoA: A Continuous Cooperative Constraint Approximation Algorithm to Solve Functional DCOPs [4.404507236193031]
This paper applies continuous non-linear optimization methods to the Cooperative Constraint Approximation (CoCoA) algorithm.
Our algorithm is able to provide high-quality solutions with smaller communication cost and execution time.
arXiv Detail & Related papers (2020-02-27T20:44:25Z)
- Learning Optimal Temperature Region for Solving Mixed Integer Functional DCOPs [26.16778095954815]
We combine Distributed Constraint Optimization Problems (DCOPs) with Functional DCOPs (F-DCOPs).
We then propose a novel algorithm, Distributed Parallel Simulated Annealing (DPSA).
We show that DPSA produces solutions of significantly better quality than the state-of-the-art non-exact algorithms in their corresponding settings.
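For reference, the following is a minimal single-chain simulated annealing loop on a toy one-dimensional objective. It only illustrates the annealing mechanism that DPSA builds on; the distributed, parallel execution and the learned temperature region from the paper are not modeled, and the objective, proposal scale, and cooling schedule are assumptions.

```python
# Single-chain simulated annealing on a toy 1-D objective. Illustrative only:
# DPSA runs chains in parallel across agents and learns where to anneal;
# the objective, proposal scale, and cooling schedule here are assumptions.
import math
import random

def energy(x):
    # Toy non-convex objective with several local minima.
    return (x - 2.0) ** 2 + math.sin(5.0 * x)

x = random.uniform(-5.0, 5.0)
temperature = 2.0
for _ in range(2000):
    candidate = x + random.gauss(0.0, 0.5)        # local random proposal
    delta = energy(candidate) - energy(x)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature = max(1e-3, temperature * 0.995)  # geometric cooling
print("approximate minimizer:", x, "energy:", energy(x))
```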
arXiv Detail & Related papers (2020-02-27T09:46:40Z)