Learning Optimal Temperature Region for Solving Mixed Integer Functional
DCOPs
- URL: http://arxiv.org/abs/2002.12001v2
- Date: Wed, 2 Sep 2020 06:17:37 GMT
- Title: Learning Optimal Temperature Region for Solving Mixed Integer Functional
DCOPs
- Authors: Saaduddin Mahmud, Md. Mosaddek Khan, Moumita Choudhury, Long
Tran-Thanh and Nicholas R. Jennings
- Abstract summary: We combine Distributed Constraint Optimization Problems (DCOPs) and Functional DCOPs (F-DCOPs) into the Mixed Integer Functional DCOP (MIF-DCOP) framework.
We then propose a novel algorithm, Distributed Parallel Simulated Annealing (DPSA).
We show that DPSA produces solutions of significantly better quality than the state-of-the-art non-exact algorithms in their corresponding settings.
- Score: 26.16778095954815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed Constraint Optimization Problems (DCOPs) are an important
framework for modeling coordinated decision-making problems in multi-agent
systems with a set of discrete variables. Later works have extended DCOPs to
model problems with a set of continuous variables, named Functional DCOPs
(F-DCOPs). In this paper, we combine both of these frameworks into the Mixed
Integer Functional DCOP (MIF-DCOP) framework that can deal with problems
regardless of their variables' type. We then propose a novel algorithm,
Distributed Parallel Simulated Annealing (DPSA), where agents cooperatively
learn the optimal parameter configuration for the algorithm while also solving
the given problem using the learned knowledge. Finally, we empirically evaluate
our approach in DCOP, F-DCOP, and MIF-DCOP settings and show that DPSA produces
solutions of significantly better quality than the state-of-the-art non-exact
algorithms in their corresponding settings.
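The abstract's core idea, agents learning a good temperature configuration while annealing, can be sketched as a toy single-process stand-in (not the authors' DPSA): plain simulated annealing plus an epsilon-greedy bandit over candidate temperature regions. All function names and parameters here are illustrative assumptions.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, temperature, steps=200):
    """Plain simulated annealing at a fixed temperature."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Always accept improving moves; accept worse moves with
        # Boltzmann probability exp((fx - fy) / T).
        if fy <= fx or random.random() < math.exp((fx - fy) / temperature):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

def learn_temperature_region(cost, neighbor, x0, regions, rounds=30):
    """Epsilon-greedy bandit over candidate temperature regions: a toy
    stand-in for the cooperative parameter learning in the abstract."""
    stats = {r: [0.0, 0] for r in regions}  # region -> [reward sum, pulls]
    best_sol, best_val = x0, cost(x0)
    for t in range(rounds):
        # Explore each region once, then mostly exploit the best so far.
        if t < len(regions) or random.random() < 0.3:
            region = regions[t % len(regions)]
        else:
            region = max(regions, key=lambda r: stats[r][0] / max(stats[r][1], 1))
        temp = random.uniform(*region)
        sol, val = simulated_annealing(cost, neighbor, x0, temp)
        stats[region][0] += -val  # lower cost = higher reward
        stats[region][1] += 1
        if val < best_val:
            best_sol, best_val = sol, val
    return best_sol, best_val
```

The bandit view is only one way to realize "learning the optimal temperature region"; the point of the sketch is that restarts at sampled temperatures both solve the problem and produce feedback about which region works.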
Related papers
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for the general-sum Markov Games (MGs) under the general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm achieves sublinear regret comparable to existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - DIMES: A Differentiable Meta Solver for Combinatorial Optimization
Problems [41.57773395100222]
Deep reinforcement learning (DRL) models have shown promising results in solving NP-hard Combinatorial Optimization problems.
This paper addresses the scalability challenge in large-scale optimization by proposing a novel approach, namely, DIMES.
Unlike previous DRL methods which suffer from costly autoregressive decoding or iterative refinements of discrete solutions, DIMES introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions.
Extensive experiments show that DIMES outperforms recent DRL-based methods on large benchmark datasets for Traveling Salesman Problems and Maximal Independent Set problems.
arXiv Detail & Related papers (2022-10-08T23:24:37Z) - On a class of geodesically convex optimization problems solved via
Euclidean MM methods [50.428784381385164]
We show how a class of geodesically convex functions can be written as a difference of Euclidean convex functions, covering several problem types in statistics and machine learning.
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
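The piecewise-linear value functions mentioned above are, in classical SDDP, represented as a maximum over affine cuts; a minimal sketch of that representation (not the paper's neural model, whose details are not given here):

```python
def pwl_value(cuts, x):
    """Evaluate a piecewise-linear lower bound V(x) = max_k (a_k * x + b_k),
    the 'cutting plane' form of value functions used in SDDP."""
    return max(a * x + b for a, b in cuts)

def add_cut(cuts, a, b):
    """Tighten the approximation by appending a new supporting cut."""
    return cuts + [(a, b)]
```

Learning to map a problem instance directly to such a function amortizes the cut-generation loop that SDDP would otherwise run per instance.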
arXiv Detail & Related papers (2021-12-01T22:55:23Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z) - Ensemble Feature Extraction for Multi-Container Quality-Diversity
Algorithms [0.2741266294612775]
Quality-Diversity algorithms search for large collections of diverse and high-performing solutions.
We describe MC-AURORA, a Quality-Diversity approach that optimises simultaneously several collections of solutions.
We show that this approach produces solutions that are more diverse than those produced by single-representation approaches.
arXiv Detail & Related papers (2021-05-03T08:35:00Z) - A Particle Swarm Inspired Approach for Continuous Distributed Constraint
Optimization Problems [7.512486812178571]
Continuous DCOPs can explicitly model problems with continuous variables.
State-of-the-art approaches for solving C-DCOPs experience either onerous memory or computation overhead.
We propose a new C-DCOP algorithm, namely Particle Swarm Optimization Based C-DCOP (PCD), which is inspired by Particle Swarm Optimization (PSO).
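For readers unfamiliar with the underlying metaheuristic, a minimal textbook PSO for a continuous objective (this is not the distributed PCD algorithm itself; all parameter values are illustrative defaults):

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing `cost` over a box."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus pulls toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

PCD distributes this kind of update across agents; the sketch only shows the centralized dynamics that inspire it.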
arXiv Detail & Related papers (2020-10-20T11:04:47Z) - Hybrid DCOP Solvers: Boosting Performance of Local Search Algorithms [0.6853165736531939]
We propose a novel method for expediting both symmetric and asymmetric Distributed Constraint Optimization Problem (DCOP) solvers.
The core idea is to initialize DCOP solvers with fast, greedy, non-iterative DCOP solvers.
We show that changing the starting conditions of existing DCOP solvers not only reduces the algorithm convergence time by up to 50%, but also reduces the communication overhead and leads to a better solution quality.
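The initialization idea can be illustrated with a toy, single-process analogue (not the paper's solvers): a greedy non-iterative pass fixes variables one at a time, and an iterative local search then refines from that warm start. The cost function and variable names are illustrative assumptions.

```python
import random

def greedy_init(cost, domains):
    """Fast non-iterative start: fix each variable, in order, to the value
    minimizing the cost of the partial assignment built so far."""
    assign = {}
    for var, dom in domains.items():
        assign[var] = min(dom, key=lambda v: cost({**assign, var: v}))
    return assign

def local_search(cost, domains, assign, iters=100):
    """One-variable hill climbing, a stand-in for iterative DCOP local search."""
    for _ in range(iters):
        var = random.choice(list(domains))
        assign[var] = min(domains[var], key=lambda v: cost({**assign, var: v}))
    return assign
```

Used on a tiny graph-coloring cost (count of equal-colored adjacent pairs among assigned variables), the greedy pass alone often lands on or near a conflict-free assignment, which is exactly the warm-start effect the paper exploits.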
arXiv Detail & Related papers (2020-09-04T15:17:24Z) - A Multi-Agent Primal-Dual Strategy for Composite Optimization over
Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
arXiv Detail & Related papers (2020-06-15T19:40:24Z) - C-CoCoA: A Continuous Cooperative Constraint Approximation Algorithm to
Solve Functional DCOPs [4.404507236193031]
This paper applies continuous non-linear optimization methods on Cooperative Constraint Approximation (CoCoA) algorithm.
Our algorithm is able to provide high-quality solutions with smaller communication cost and execution time.
arXiv Detail & Related papers (2020-02-27T20:44:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.