Approximation Algorithms for Preference Aggregation Using CP-Nets
- URL: http://arxiv.org/abs/2312.09162v2
- Date: Fri, 15 Dec 2023 15:06:57 GMT
- Title: Approximation Algorithms for Preference Aggregation Using CP-Nets
- Authors: Abu Mohammad Hammad Ali, Boting Yang, Sandra Zilles
- Abstract summary: This paper studies the design and analysis of approximation algorithms for aggregating preferences over Conditional Preference Networks (CP-nets).
Its focus is on aggregating preferences over so-called \emph{swaps}, for which optimal solutions in general are already known to be of exponential size.
- Score: 3.337244446073836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the design and analysis of approximation algorithms for
aggregating preferences over combinatorial domains, represented using
Conditional Preference Networks (CP-nets). Its focus is on aggregating
preferences over so-called \emph{swaps}, for which optimal solutions in general
are already known to be of exponential size. We first analyze a trivial
2-approximation algorithm that simply outputs the best of the given input
preferences, and establish a structural condition under which the approximation
ratio of this algorithm is improved to $4/3$. We then propose a polynomial-time
approximation algorithm whose outputs are provably no worse than those of the
trivial algorithm, but often substantially better. A family of problem
instances is presented for which our improved algorithm produces optimal
solutions, while, for any $\varepsilon > 0$, the trivial algorithm can\emph{not}
attain a $(2-\varepsilon)$-approximation. These results may lead to the first
polynomial-time approximation algorithm that solves the CP-net aggregation
problem for swaps with an approximation ratio substantially better than $2$.
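As a concrete illustration, the sketch below implements the trivial baseline in Python: return the input preference that disagrees least with the rest. The swap encoding and the `disagreement` measure are hypothetical stand-ins (the abstract does not specify the formalism); only the "output the best of the inputs" idea is taken from the paper.

```python
# Sketch of the trivial 2-approximation: output the best of the inputs.
# Each preference is modeled as a dict mapping a swap (a pair of outcomes)
# to the preferred outcome -- an assumed encoding, not the paper's own.

def disagreement(p, q):
    """Hypothetical disagreement count: swaps that p and q order differently."""
    return sum(1 for swap in p if swap in q and p[swap] != q[swap])

def trivial_aggregate(preferences):
    """Return the input minimizing total disagreement with all inputs."""
    return min(preferences,
               key=lambda p: sum(disagreement(p, q) for q in preferences))

# Example: three voters over two swaps; the middle voter is closest overall.
voters = [
    {("a", "b"): "a", ("c", "d"): "c"},
    {("a", "b"): "a", ("c", "d"): "d"},
    {("a", "b"): "b", ("c", "d"): "d"},
]
print(trivial_aggregate(voters))  # -> the second voter's preference
```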
Related papers
- Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z)
- Linear Query Approximation Algorithms for Non-monotone Submodular Maximization under Knapsack Constraint [16.02833173359407]
This work introduces two constant-factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint.
$\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$.
arXiv Detail & Related papers (2023-05-17T15:27:33Z)
- A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic Composite Optimization [10.762749887051546]
We propose two single-time-scale algorithms: Prox-DASA and Prox-DASA-GT.
Unlike prior work, our algorithms achieve comparable complexity without requiring large batch sizes, more complex per-iteration operations, or stronger assumptions.
arXiv Detail & Related papers (2023-02-20T05:16:18Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds on the number of communication rounds and local computations required, and design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Simultaneous Sieves: A Deterministic Streaming Algorithm for Non-Monotone Submodular Maximization [16.346069386394703]
We present a deterministic, single-pass streaming algorithm for the problem of maximizing a submodular function, not necessarily monotone, subject to a cardinality constraint.
Our algorithm improves the best approximation factor from $1/9$ in the previous literature to $\approx 0.2689$.
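For background, the single-threshold rule at the core of sieve-style streaming (due to Badanidiyuru et al. for the monotone case) can be sketched as follows; this is the building block such algorithms extend, not the paper's deterministic non-monotone algorithm, and `opt_guess` stands in for the usual parallel guesses of OPT.

```python
# Basic sieve-streaming threshold rule (monotone case), for background only;
# the paper's deterministic non-monotone algorithm is more involved.

def sieve_stream(stream, f, k, opt_guess):
    """One pass: keep an element iff its marginal gain clears the
    threshold (opt_guess/2 - f(S)) / (k - |S|); at most k picks."""
    S = []
    for e in stream:
        if len(S) < k:
            gain = f(S + [e]) - f(S)
            if gain >= (opt_guess / 2 - f(S)) / (k - len(S)):
                S.append(e)
    return S

def coverage(S):
    """Toy monotone submodular function: size of the union."""
    return len(set().union(*S)) if S else 0

print(sieve_stream([{1, 2}, {2, 3}, {4}], coverage, k=2, opt_guess=4))
```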
arXiv Detail & Related papers (2020-10-27T15:22:49Z)
- Revisiting Modified Greedy Algorithm for Monotone Submodular Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We also derive a data-dependent upper bound on the optimum, which can be used to significantly improve the efficiency of algorithms such as branch and bound.
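To make the template concrete, here is a hedged sketch of the modified greedy that such analyses study: density greedy over feasible elements, compared against the best feasible singleton. The names and the feasibility filter are illustrative; the refined analysis behind the $0.305$ factor is not reproduced here.

```python
# Sketch of modified greedy for monotone submodular maximization under a
# knapsack constraint: density greedy, then compare with the best singleton.
# Assumes positive costs and a monotone objective f; illustrative only.

def modified_greedy(elements, f, cost, budget):
    chosen, remaining = [], set(elements)
    while remaining:
        spent = sum(cost(x) for x in chosen)
        feasible = [e for e in remaining if spent + cost(e) <= budget]
        if not feasible:
            break
        # Pick the best marginal gain per unit cost.
        e = max(feasible, key=lambda e: (f(chosen + [e]) - f(chosen)) / cost(e))
        chosen.append(e)
        remaining.remove(e)
    # The singleton comparison is what the classic analysis requires.
    best_single = max((e for e in elements if cost(e) <= budget),
                      key=lambda e: f([e]), default=None)
    if best_single is not None and f([best_single]) > f(chosen):
        return [best_single]
    return chosen
```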
arXiv Detail & Related papers (2020-08-12T15:40:21Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Second-order Conditional Gradient Sliding [79.66739383117232]
We present the \emph{Second-order Conditional Gradient Sliding} (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
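To make the oracle access pattern concrete, the sketch below shows a single projection-free (Frank-Wolfe style) step, where the feasible region is touched only through a linear optimization oracle; SOCGS itself interleaves second-order steps with such oracle calls and is not reproduced here.

```python
# One conditional-gradient (Frank-Wolfe) step, illustrating LMO-only access
# to the feasible region; SOCGS builds on this primitive but is more complex.

def frank_wolfe_step(x, grad, lmo, t):
    """lmo(grad) returns an argmin of <grad, v> over the feasible region;
    we then move toward that vertex with the standard 2/(t+2) step size."""
    v = lmo(grad)
    gamma = 2.0 / (t + 2.0)
    return [(1 - gamma) * xi + gamma * vi for xi, vi in zip(x, v)]

# Example LMO for the probability simplex: put all mass on the smallest
# gradient coordinate (a hypothetical region, chosen for illustration).
def simplex_lmo(grad):
    j = min(range(len(grad)), key=grad.__getitem__)
    return [1.0 if i == j else 0.0 for i in range(len(grad))]

x1 = frank_wolfe_step([0.5, 0.5], [0.2, -0.1], simplex_lmo, t=0)
```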
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
- Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
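For reference, the textbook big-M mixed-integer formulation behind such $\ell_0$ approaches is shown below; $M$ is an assumed bound on the magnitude of the coefficients, and the paper's exact formulation may differ.

$$\min_{\beta \in \mathbb{R}^p,\; z \in \{0,1\}^p} \ \|y - X\beta\|_2^2 \quad \text{s.t.} \quad |\beta_j| \le M z_j \;\; (j = 1, \dots, p), \qquad \sum_{j=1}^{p} z_j \le k.$$

Here $z_j$ switches feature $j$ on or off, so $\sum_j z_j \le k$ caps the support size $\|\beta\|_0$.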
arXiv Detail & Related papers (2020-01-17T18:47:02Z)