When Can Liquid Democracy Unveil the Truth?
- URL: http://arxiv.org/abs/2104.01828v1
- Date: Mon, 5 Apr 2021 09:45:14 GMT
- Title: When Can Liquid Democracy Unveil the Truth?
- Authors: Ruben Becker, Gianlorenzo D'Angelo, Esmaeil Delfaraz and Hugo Gilbert
- Abstract summary: We investigate the so-called ODP-problem that has been formulated by Caragiannis and Micha.
Under some hypothesis on either the accuracies of voters or the connectivity of the network, we obtain a polynomial-time $1/2$-approximation algorithm.
This observation proves formally that the connectivity of the social network is a key feature for the efficiency of the liquid democracy paradigm.
- Score: 12.198420431487731
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we investigate the so-called ODP-problem that has been
formulated by Caragiannis and Micha [10]. Here, we are in a setting with two
election alternatives out of which one is assumed to be correct. In ODP, the
goal is to organise the delegations in the social network in order to maximize
the probability that the correct alternative, referred to as ground truth, is
elected. While the problem is known to be computationally hard, we strengthen
existing hardness results by providing a novel strong approximation hardness
result: For any positive constant $C$, we prove that, unless $P=NP$, there is
no polynomial-time algorithm for ODP that achieves an approximation guarantee
of $\alpha \ge (\ln n)^{-C}$, where $n$ is the number of voters. The reduction
designed for this result uses poorly connected social networks in which some
voters suffer from misinformation. Interestingly, under some hypothesis on
either the accuracies of voters or the connectivity of the network, we obtain a
polynomial-time $1/2$-approximation algorithm. This observation proves formally
that the connectivity of the social network is a key feature for the efficiency
of the liquid democracy paradigm. Lastly, we run extensive simulations and
observe that simple algorithms (working either in a centralized or
decentralized way) outperform direct democracy on a large class of instances.
Overall, our contributions yield new insights on the question in which
situations liquid democracy can be beneficial.
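To make the setting concrete, below is a minimal Monte Carlo sketch (not taken from the paper) contrasting direct democracy with a naive "delegate to your most accurate neighbour" rule on a toy network. The accuracies, the network, and the greedy rule are illustrative assumptions only; they stand in for, but do not reproduce, the centralized and decentralized algorithms analysed by the authors.

```python
import random

# Toy instance (all numbers illustrative): voter i votes for the ground truth
# with probability accuracies[i]; neighbours[i] lists who i may delegate to.
accuracies = [0.55, 0.9, 0.6, 0.45, 0.7]
neighbours = {0: [1], 1: [], 2: [1, 4], 3: [0, 2], 4: []}
voters = range(len(accuracies))

def guru(v, delegations, seen=()):
    """Follow v's delegation chain to its final representative (cycle-safe)."""
    d = delegations.get(v)
    if d is None or d in seen:
        return v
    return guru(d, delegations, seen + (v,))

def win_probability(delegations, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that the ground truth wins."""
    rng = random.Random(seed)
    gurus = {v: guru(v, delegations) for v in voters}
    wins = 0
    for _ in range(trials):
        votes = {v: rng.random() < accuracies[v] for v in voters}
        correct = sum(votes[gurus[v]] for v in voters)  # every voter has weight 1
        wins += correct > len(accuracies) / 2
    return wins / trials

# Direct democracy: nobody delegates.
print("direct democracy:", win_probability({}))

# Naive local rule: delegate to your most accurate neighbour if they are
# strictly more accurate than you.
greedy = {}
for v in voters:
    best = max(neighbours[v], key=lambda u: accuracies[u], default=None)
    if best is not None and accuracies[best] > accuracies[v]:
        greedy[v] = best
print("greedy delegation:", win_probability(greedy))
```

On this toy instance the greedy rule concentrates all weight on the most accurate voter and beats direct democracy, illustrating (in a very simplified way) why delegation can help when accurate voters are reachable through the network.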
Related papers
- Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent [83.85536329832722]
We show that stochastic gradient descent (SGD) can efficiently solve the $k$-parity problem on a $d$-dimensional hypercube.
We then demonstrate how a neural network trained with SGD solves the $k$-parity problem with small statistical error.
arXiv Detail & Related papers (2024-04-18T17:57:53Z) - Elections in the Post-Quantum Era: Is the Complexity Shield Strong
Enough? [0.0]
We consider quantum computers to be a new threat to the complexity shield described above.
We provide an overview of possible attacks on election, discuss the abilities of quantum computing, and chart possible directions for future research in this area.
arXiv Detail & Related papers (2024-03-08T12:52:11Z) - When is Agnostic Reinforcement Learning Statistically Tractable? [76.1408672715773]
A new complexity measure, called the spanning capacity, depends solely on the policy set $\Pi$ and is independent of the MDP dynamics.
We show there exists a policy class $\Pi$ with a bounded spanning capacity that requires a superpolynomial number of samples to learn.
This reveals a surprising separation for learnability between generative access and online access models.
arXiv Detail & Related papers (2023-10-09T19:40:54Z) - Model-Free, Regret-Optimal Best Policy Identification in Online CMDPs [17.62509045102346]
This paper considers the best policy identification problem in online Constrained Markov Decision Processes (CMDPs).
We are interested in algorithms that are model-free, have low regret, and identify an approximately optimal policy with a high probability.
Existing model-free algorithms for online CMDPs with sublinear regret and constraint violation do not provide any convergence guarantee to an optimal policy.
arXiv Detail & Related papers (2023-09-27T04:33:09Z) - A Nearly-Linear Time Algorithm for Minimizing Risk of Conflict in Social
Networks [11.244268226976478]
We study the problem of minimizing risk of conflict in social networks by modifying the initial opinions of a small number of nodes.
In particular, the fast one scales to large networks with more than two million nodes, and achieves up to $20\times$ speed-up.
arXiv Detail & Related papers (2023-01-13T10:32:12Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - Provably Efficient Offline Reinforcement Learning with Trajectory-Wise
Reward [66.81579829897392]
We propose a novel offline reinforcement learning algorithm called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED).
PARTED decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy rewards.
To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient in general MDPs with trajectory-wise reward.
arXiv Detail & Related papers (2022-06-13T19:11:22Z) - Fine-Grained Complexity and Algorithms for the Schulze Voting Method [9.399840807973545]
We study computational aspects of a well-known single-winner voting rule called the Schulze method.
Our paper is the first to bring fine-grained complexity into the field of computational social choice.
arXiv Detail & Related papers (2021-03-05T22:27:36Z) - Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms.
arXiv Detail & Related papers (2021-02-13T12:57:51Z) - Provably Efficient Model-Free Algorithm for MDPs with Peak Constraints [38.2783003051101]
This paper considers the peak Constrained Markov Decision Process (PCMDP), where the agent chooses the policy to maximize total reward in the finite horizon as well as satisfy constraints at each epoch with probability 1.
We propose a model-free algorithm that converts the PCMDP problem into an unconstrained problem, to which a Q-learning based approach is applied.
arXiv Detail & Related papers (2020-03-11T23:23:29Z)