Learning Optimal Temperature Region for Solving Mixed Integer Functional
DCOPs
- URL: http://arxiv.org/abs/2002.12001v2
- Date: Wed, 2 Sep 2020 06:17:37 GMT
- Title: Learning Optimal Temperature Region for Solving Mixed Integer Functional
DCOPs
- Authors: Saaduddin Mahmud, Md. Mosaddek Khan, Moumita Choudhury, Long
Tran-Thanh and Nicholas R. Jennings
- Abstract summary: We combine Distributed Constraint Optimization Problems (DCOPs) and Functional DCOPs (F-DCOPs) into the Mixed Integer Functional DCOP (MIF-DCOP) framework.
We then propose a novel algorithm, Distributed Parallel Simulated Annealing (DPSA), in which agents cooperatively learn the optimal parameter configuration while solving the given problem.
We show that DPSA produces solutions of significantly better quality than the state-of-the-art non-exact algorithms in their corresponding settings.
- Score: 26.16778095954815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed Constraint Optimization Problems (DCOPs) are an important
framework for modeling coordinated decision-making problems in multi-agent
systems with a set of discrete variables. Later works have extended DCOPs to
model problems with a set of continuous variables, named Functional DCOPs
(F-DCOPs). In this paper, we combine both of these frameworks into the Mixed
Integer Functional DCOP (MIF-DCOP) framework that can deal with problems
regardless of their variables' type. We then propose a novel algorithm,
Distributed Parallel Simulated Annealing (DPSA), where agents cooperatively
learn the optimal parameter configuration for the algorithm while also solving
the given problem using the learned knowledge. Finally, we empirically evaluate
our approach in DCOP, F-DCOP, and MIF-DCOP settings and show that DPSA produces
solutions of significantly better quality than the state-of-the-art non-exact
algorithms in their corresponding settings.
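As a rough illustration of the strategy the abstract describes, the sketch below runs several simulated-annealing instances at different fixed temperatures and greedily reinforces the temperature that keeps producing the best solutions. The toy cost function, the temperature grid, and the win-counting selection rule are assumptions made for illustration only; the authors' actual DPSA algorithm is distributed across cooperating agents and its learning procedure differs.

```python
import math
import random

# Hypothetical toy cost over integer variables, used only for illustration;
# a real MIF-DCOP would be defined by agents, mixed discrete/continuous
# variables, and constraint cost functions exchanged via messages.
def cost(x):
    return sum((x[i] - x[(i + 1) % len(x)]) ** 2 for i in range(len(x)))

def sa_run(start, temperature, steps=200):
    """One simulated-annealing run at a fixed temperature (illustrative)."""
    cur, cur_cost = list(start), cost(start)
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        cand = list(cur)
        i = random.randrange(len(cand))
        cand[i] += random.choice([-1, 1])          # simple discrete move
        cand_cost = cost(cand)
        delta = cand_cost - cur_cost
        # Metropolis rule: always accept improvements, accept worse moves
        # with probability exp(-delta / T).
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(cur), cur_cost
    return best, best_cost

def dpsa_like(n_vars=10, rounds=20, temps=(0.1, 0.5, 1.0, 2.0, 5.0)):
    """Run several SA instances 'in parallel' at different temperatures and
    greedily reinforce the temperature that keeps winning: a stand-in for
    the cooperative parameter learning described in the abstract."""
    wins = {t: 0 for t in temps}
    best = [random.randint(0, 5) for _ in range(n_vars)]
    best_cost = cost(best)
    for _ in range(rounds):
        results = {t: sa_run(best, t) for t in temps}
        t_star = min(results, key=lambda t: results[t][1])
        wins[t_star] += 1
        if results[t_star][1] < best_cost:
            best, best_cost = results[t_star]
    return best, best_cost, max(wins, key=wins.get)

if __name__ == "__main__":
    sol, c, t = dpsa_like()
    print("best cost:", c, "| most successful temperature:", t)
```

The point of the stand-in is that the search both solves the toy problem and records which temperature region works best, mirroring the learn-while-solving idea in the abstract.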
Related papers
- Explainable Distributed Constraint Optimization Problems [5.172964916120901]
We propose the Explainable DCOP model, which extends a DCOP to include its solution and a contrastive query for that solution.
We show that our approach can scale to large problems, and that the different variants offer different trade-offs between explanation length and runtime.
arXiv Detail & Related papers (2025-02-19T21:06:30Z)
- Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present a $\textbf{MA-OSMA}$ algorithm to transform the discrete submodular problem into a continuous optimization.
We also introduce a projection-free $\textbf{MA-OSEA}$ algorithm, which effectively utilizes the KL divergence by mixing in a uniform distribution.
Our algorithms significantly improve the $\frac{1}{1+c}$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z)
- Differentially Private Random Block Coordinate Descent [51.62669821275571]
We propose a differentially private random coordinate descent method that selects multiple coordinates with varying probabilities in each iteration using sketch matrices.
Our algorithm generalizes both DP-CD and the classical DP-SGD (Differentially Private Stochastic Gradient Descent), while preserving the same utility guarantees.
arXiv Detail & Related papers (2024-12-22T15:06:56Z)
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z)
- Ensemble Feature Extraction for Multi-Container Quality-Diversity Algorithms [0.2741266294612775]
Quality-Diversity algorithms search for large collections of diverse and high-performing solutions.
We describe MC-AURORA, a Quality-Diversity approach that simultaneously optimises several collections of solutions.
We show that this approach produces solutions that are more diverse than those produced by single-representation approaches.
arXiv Detail & Related papers (2021-05-03T08:35:00Z)
- A Particle Swarm Inspired Approach for Continuous Distributed Constraint Optimization Problems [7.512486812178571]
Continuous DCOPs (C-DCOPs) can explicitly model problems with continuous variables.
State-of-the-art approaches for solving C-DCOPs suffer from either onerous memory or computation overhead.
We propose a new C-DCOP algorithm, namely Particle Swarm Optimization Based C-DCOP (PCD), which is inspired by Particle Swarm Optimization (PSO).
arXiv Detail & Related papers (2020-10-20T11:04:47Z)
- Hybrid DCOP Solvers: Boosting Performance of Local Search Algorithms [0.6853165736531939]
We propose a novel method for expediting both symmetric and asymmetric Distributed Constraint Optimization Problem (DCOP) solvers.
The core idea is to initialize iterative DCOP solvers with the output of fast, greedy, non-iterative DCOP solvers (a warm-start sketch in this spirit appears after this list).
We show that changing the starting conditions of existing DCOP solvers not only reduces convergence time by up to 50%, but also reduces communication overhead and leads to better solution quality.
arXiv Detail & Related papers (2020-09-04T15:17:24Z)
- A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
arXiv Detail & Related papers (2020-06-15T19:40:24Z)
- C-CoCoA: A Continuous Cooperative Constraint Approximation Algorithm to Solve Functional DCOPs [4.404507236193031]
This paper applies continuous non-linear optimization methods to the Cooperative Constraint Approximation (CoCoA) algorithm.
Our algorithm is able to provide high-quality solutions at a smaller communication cost and execution time.
arXiv Detail & Related papers (2020-02-27T20:44:25Z)
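Returning to the "Hybrid DCOP Solvers" entry above, the sketch below contrasts a greedy, non-iterative initial assignment with a random one before running the same simple local search, which is the warm-start intuition that entry describes. The toy pairwise-cost problem, the greedy rule, and the refinement loop are illustrative assumptions written as a centralised stand-in; the paper's solvers are distributed and its exact algorithms are not reproduced here.

```python
import random

# Toy pairwise-cost problem over a small discrete domain. This is an
# illustrative, centralised stand-in: the surveyed solvers are distributed,
# and the greedy and local-search rules below are simplified assumptions.
DOMAIN = list(range(5))

def make_problem(n=20, density=0.2, seed=1):
    """Generate a random constraint graph with random pairwise cost tables."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < density]
    cost = {(i, j): {(a, b): rng.randint(0, 9) for a in DOMAIN for b in DOMAIN}
            for i, j in edges}
    return n, edges, cost

def pair_cost(cost, i, j, vi, vj):
    return cost[(i, j)][(vi, vj)] if (i, j) in cost else cost[(j, i)][(vj, vi)]

def total_cost(assign, edges, cost):
    return sum(pair_cost(cost, i, j, assign[i], assign[j]) for i, j in edges)

def neighbour_map(n, edges):
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    return nbrs

def greedy_init(n, edges, cost):
    """Fast, non-iterative warm start: each variable in turn picks the value
    that is cheapest against its already-assigned neighbours."""
    nbrs = neighbour_map(n, edges)
    assign = {}
    for i in range(n):
        done = [j for j in nbrs[i] if j in assign]
        assign[i] = min(DOMAIN, key=lambda v: sum(pair_cost(cost, i, j, v, assign[j])
                                                  for j in done))
    return assign

def local_search(assign, n, edges, cost, iters=500):
    """Simple iterative refinement: repeatedly move one variable to its
    locally best value given its neighbours' current values."""
    nbrs = neighbour_map(n, edges)
    for _ in range(iters):
        i = random.randrange(n)
        assign[i] = min(DOMAIN, key=lambda v: sum(pair_cost(cost, i, j, v, assign[j])
                                                  for j in nbrs[i]))
    return assign

if __name__ == "__main__":
    n, edges, cost = make_problem()
    warm = local_search(greedy_init(n, edges, cost), n, edges, cost)
    cold = local_search({i: random.choice(DOMAIN) for i in range(n)}, n, edges, cost)
    print("greedy-initialised cost:", total_cost(warm, edges, cost))
    print("random-initialised cost:", total_cost(cold, edges, cost))
```

Running the script compares the final costs reached from the greedy and random starting points under the same budget of local-search moves, which is the kind of comparison the entry's reported speed-ups refer to.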
This list is automatically generated from the titles and abstracts of the papers in this site.