Explainable Distributed Constraint Optimization Problems
- URL: http://arxiv.org/abs/2502.14102v1
- Date: Wed, 19 Feb 2025 21:06:30 GMT
- Title: Explainable Distributed Constraint Optimization Problems
- Authors: Ben Rachmut, Stylianos Loukas Vasileiou, Nimrod Meir Weinstein, Roie Zivan, William Yeoh
- Abstract summary: We propose the Explainable DCOP model, which extends a DCOP to include its solution and a contrastive query for that solution. We show that our approach can scale to large problems, and the different variants provide different options for trading off explanation lengths for smaller runtimes.
- Score: 5.172964916120901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool to model cooperative multi-agent problems that need to be solved distributively. A core assumption of existing approaches is that DCOP solutions can be easily understood, accepted, and adopted, which may not hold, as evidenced by the large body of literature on Explainable AI. In this paper, we propose the Explainable DCOP (X-DCOP) model, which extends a DCOP to include its solution and a contrastive query for that solution. We formally define some key properties that contrastive explanations must satisfy for them to be considered as valid solutions to X-DCOPs as well as theoretical results on the existence of such valid explanations. To solve X-DCOPs, we propose a distributed framework as well as several optimizations and suboptimal variants to find valid explanations. We also include a human user study that showed that users, not surprisingly, prefer shorter explanations over longer ones. Our empirical evaluations showed that our approach can scale to large problems, and the different variants provide different options for trading off explanation lengths for smaller runtimes. Thus, our model and algorithmic contributions extend the state of the art by reducing the barrier for users to understand DCOP solutions, facilitating their adoption in more real-world applications.
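To make the model concrete, here is a minimal sketch of the ingredients the abstract describes: a DCOP (variables, domains, cost functions), a candidate solution, and a contrastive query. The validity test below is an illustrative reading of the abstract, not the paper's formal definition, and all names are hypothetical.

```python
# Hypothetical sketch of an X-DCOP instance: a DCOP (variables, domains,
# binary cost tables) extended with a solution and a contrastive query.
# The validity test is illustrative, not the paper's formal definition.

domains = {"x1": [0, 1], "x2": [0, 1]}

# costs[(u, v)][(a, b)] = cost of assigning u=a and v=b (minimization)
costs = {("x1", "x2"): {(0, 0): 3, (0, 1): 1, (1, 0): 2, (1, 1): 4}}

solution = {"x1": 0, "x2": 1}   # the solution to be explained
query = {"x1": 1, "x2": 0}      # contrastive query: "why not this instead?"

def cost_on(assignment, support):
    # Total cost restricted to the constraints cited by an explanation.
    return sum(costs[(u, v)][(assignment[u], assignment[v])]
               for (u, v) in support)

def is_valid_explanation(support):
    # Informally: the cited constraints alone must show the solution
    # is no worse than the queried alternative.
    return cost_on(solution, support) <= cost_on(query, support)

print(is_valid_explanation([("x1", "x2")]))   # True: cost 1 vs. cost 2
```

A shorter explanation cites fewer constraints, which is the length/runtime trade-off the abstract mentions.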
Related papers
- Domain Adaptation and Entanglement: an Optimal Transport Perspective [86.24617989187988]
Current machine learning systems are brittle in the face of distribution shifts (DS), where the target distribution that the system is tested on differs from the source distribution used to train the system.
For deep neural networks, a popular framework for unsupervised domain adaptation (UDA) is domain matching, in which algorithms try to align the marginal distributions in the feature or output space.
In this paper, we derive new bounds based on optimal transport that analyze the UDA problem.
arXiv Detail & Related papers (2025-03-11T08:10:03Z)
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative approaches to network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Generative diffusion models are popular in various cross-domain applications.
These models hold promise in tackling complex network optimization problems.
We propose a new framework for generative diffusion models called Diffusion Model-based Solution Generation (DiffSG).
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Mining Potentially Explanatory Patterns via Partial Solutions [39.58317527488534]
We present an algorithm that assembles a collection of Partial Solutions chosen to strike a balance among high fitness, simplicity, and atomicity.
Experiments with standard benchmarks show that the proposed algorithm is able to find Partial Solutions which improve explainability at reasonable computational cost without affecting search performance.
arXiv Detail & Related papers (2024-04-05T20:12:02Z)
- Efficient Alternating Minimization Solvers for Wyner Multi-View Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, local-minima complexity that scales with the number of views, and exceptional accuracy compared with state-of-the-art techniques.
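For readers unfamiliar with the alternating principle referenced above, here is a generic two-block alternating minimization loop on a toy objective with closed-form block updates; the paper's solvers for the Wyner multi-view objective are more involved.

```python
def alternating_minimization(f, update_x, update_y, x0, y0, iters=50):
    # Minimize f(x, y) by optimizing one block while holding the other fixed.
    x, y = x0, y0
    for _ in range(iters):
        x = update_x(y)   # argmin_x f(x, y) with y fixed
        y = update_y(x)   # argmin_y f(x, y) with x fixed
    return x, y, f(x, y)

# Toy objective f(x, y) = (x - 2y)^2 + (y - 1)^2 with exact block updates.
f = lambda x, y: (x - 2 * y) ** 2 + (y - 1) ** 2
x, y, val = alternating_minimization(
    f,
    update_x=lambda y: 2 * y,             # argmin_x: x = 2y
    update_y=lambda x: (2 * x + 1) / 5,   # argmin_y: set df/dy = 0
    x0=0.0, y0=0.0,
)
print(x, y, val)   # converges to x = 2, y = 1, f = 0
```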
arXiv Detail & Related papers (2023-03-28T10:17:51Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
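As a generic illustration of algorithm unrolling (the baseline strategy the paper analyzes, not its folded-optimization system), one can differentiate through a fixed number of inner gradient steps:

```python
import torch

def unrolled_gd(theta, x0, steps=10, lr=0.1):
    # Unroll a fixed number of gradient steps on the inner objective
    # f(x; theta) = 0.5 * ||x - theta||^2; autograd then backpropagates
    # through every iteration of the inner solver.
    x = x0
    for _ in range(steps):
        x = x - lr * (x - theta)   # (x - theta) is grad_x f
    return x

theta = torch.tensor([2.0, -1.0], requires_grad=True)
x_star = unrolled_gd(theta, torch.zeros(2))
x_star.sum().backward()   # gradient flows through the unrolled loop
print(theta.grad)         # sensitivity of the inner solution to theta
```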
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
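For reference, the proximal operator being learned is the standard map (the paper's contribution is learning it jointly over a family of problem instances):

```latex
\operatorname{prox}_{\lambda f}(y) \;=\; \underset{x}{\arg\min}\; f(x) + \frac{1}{2\lambda}\,\lVert x - y \rVert_2^2
```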
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- A Particle Swarm Inspired Approach for Continuous Distributed Constraint Optimization Problems [7.512486812178571]
Continuous DCOPs (C-DCOPs) can explicitly model problems with continuous variables.
State-of-the-art approaches for solving C-DCOPs experience either onerous memory or computation overhead.
We propose a new C-DCOP algorithm, namely Particle Swarm Optimization Based C-DCOP (PCD), which is inspired by Particle Swarm Optimization (PSO).
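For intuition, here is plain centralized PSO on a toy continuous objective; PCD's contribution is the distributed, constraint-aware version, which this sketch does not capture.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull (own best) + social pull (swarm best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

# toy continuous cost surface standing in for aggregated constraint costs
best, val = pso_minimize(lambda z: np.sum((z - 1.0) ** 2), dim=3)
print(best, val)
```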
arXiv Detail & Related papers (2020-10-20T11:04:47Z)
- Learning What to Defer for Maximum Independent Sets [84.00112106334655]
We propose a novel DRL scheme, coined learning what to defer (LwD), where the agent adaptively shrinks or stretches the number of stages by learning to distribute the element-wise decisions of the solution at each stage.
We apply the proposed framework to the maximum independent set (MIS) problem, and demonstrate its significant improvement over the current state-of-the-art DRL scheme.
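A toy, purely illustrative rendering of the deferral idea for MIS, with a random policy standing in for the learned agent (names and the policy are hypothetical):

```python
import random

def defer_decisions_mis(adj, stages=10, p_defer=0.5, seed=0):
    # Stage-wise loop: each undecided vertex is included, excluded, or
    # deferred to a later stage; LwD would use a learned policy instead
    # of the random coin flip below.
    rng = random.Random(seed)
    undecided, included = set(adj), set()
    for _ in range(stages):
        for v in list(undecided):
            if v not in undecided or rng.random() < p_defer:
                continue                          # defer this vertex
            undecided.discard(v)
            if not any(u in included for u in adj[v]):
                included.add(v)
                undecided.difference_update(adj[v])   # neighbours forced out
        if not undecided:
            break
    for v in sorted(undecided):                   # finish leftovers greedily
        if not any(u in included for u in adj[v]):
            included.add(v)
    return included

graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}    # a path graph
print(defer_decisions_mis(graph))                 # a valid independent set
```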
arXiv Detail & Related papers (2020-06-17T02:19:31Z)
- C-CoCoA: A Continuous Cooperative Constraint Approximation Algorithm to Solve Functional DCOPs [4.404507236193031]
This paper applies continuous non-linear optimization methods to the Cooperative Constraint Approximation (CoCoA) algorithm.
Our algorithm is able to provide high-quality solutions with smaller communication cost and execution time.
arXiv Detail & Related papers (2020-02-27T20:44:25Z)
- Learning Optimal Temperature Region for Solving Mixed Integer Functional DCOPs [26.16778095954815]
We combine Distributed Constraint Optimization Problems (DCOPs) with Functional DCOPs (F-DCOPs).
We then propose a novel algorithm, Distributed Parallel Simulated Annealing (DPSA).
We show that DPSA produces solutions of significantly better quality than the state-of-the-art non-exact algorithms in their corresponding settings.
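For the underlying mechanism DPSA parallelizes and tunes the temperature of, here is a bare-bones centralized simulated annealing loop (the distribution and temperature-region learning are beyond this sketch):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=1.0, alpha=0.99,
                        iters=5000, seed=0):
    rng = random.Random(seed)
    x, T = x0, T0
    c = cost(x)
    best, best_c = x, c
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if cy <= c or rng.random() < math.exp((c - cy) / max(T, 1e-12)):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= alpha                 # geometric cooling schedule
    return best, best_c

# toy 1-D cost with many local minima
cost = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
print(simulated_annealing(cost, step, x0=5.0))
```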
arXiv Detail & Related papers (2020-02-27T09:46:40Z)