Computing Optimal Manipulations in Cryptographic Self-Selection Proof-of-Stake Protocols
- URL: http://arxiv.org/abs/2406.15282v1
- Date: Fri, 21 Jun 2024 16:20:39 GMT
- Title: Computing Optimal Manipulations in Cryptographic Self-Selection Proof-of-Stake Protocols
- Authors: Matheus V. X. Ferreira, Aadityan Ganesh, Jack Hourigan, Hannah Huh, S. Matthew Weinberg, Catherine Yu
- Abstract summary: We develop methods to provably nail $f(\alpha,\beta)$ for any desired $(\alpha,\beta)$ up to arbitrary precision.
One technical challenge is that natural sampling-based estimates of the mean of our target distribution are \emph{not} unbiased estimators.
- Score: 3.3139913966251227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cryptographic Self-Selection is a paradigm employed by modern Proof-of-Stake consensus protocols to select a block-proposing "leader." Algorand [Chen and Micali, 2019] proposes a canonical protocol, and Ferreira et al. [2022] establish bounds $f(\alpha,\beta)$ on the maximum fraction of rounds a strategic player can lead as a function of their stake $\alpha$ and a network connectivity parameter $\beta$. While both their lower and upper bounds are non-trivial, there is a substantial gap between them (for example, they establish $f(10\%,1) \in [10.08\%, 21.12\%]$), leaving open the question of how significant of a concern these manipulations are. We develop computational methods to provably nail $f(\alpha,\beta)$ for any desired $(\alpha,\beta)$ up to arbitrary precision, and implement our method on a wide range of parameters (for example, we confirm $f(10\%,1) \in [10.08\%, 10.15\%]$). Methodologically, estimating $f(\alpha,\beta)$ can be phrased as estimating to high precision the value of a Markov Decision Process whose states are countably-long lists of real numbers. Our methodological contributions involve (a) reformulating the question instead as computing to high precision the expected value of a distribution that is a fixed-point of a non-linear sampling operator, and (b) provably bounding the error induced by various truncations and sampling estimations of this distribution (which appears intractable to solve in closed form). One technical challenge, for example, is that natural sampling-based estimates of the mean of our target distribution are \emph{not} unbiased estimators, and therefore our methods necessarily go beyond claiming sufficiently-many samples to be close to the mean.
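To make the methodological picture concrete, here is a minimal, self-contained sketch (not the paper's actual operator or algorithm) of the general pattern the abstract describes: iterate a non-linear sampling operator on an empirical population until it approximates its fixed-point distribution, then report the plug-in mean. The operator `sampling_operator` below is a toy stand-in invented for illustration, and the sketch deliberately omits the error bounds that are the paper's actual contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_operator(pop, alpha, n_out):
    # Toy, made-up non-linear operator: each output draw is the max of
    # fresh randomness and a damped resample from the current population.
    # It is non-linear in the distribution (the fixed-point CDF satisfies
    # F(x) = F_A(x) * F(x / 0.9)), standing in for the protocol-derived
    # operator the paper analyzes.
    resampled = rng.choice(pop, size=n_out)
    fresh = rng.exponential(scale=alpha, size=n_out)
    return np.maximum(fresh, 0.9 * resampled)

def fixed_point_mean(alpha, n=100_000, iters=60):
    pop = rng.exponential(scale=alpha, size=n)  # arbitrary initialization
    for _ in range(iters):
        pop = sampling_operator(pop, alpha, n)
    # Caveat highlighted in the abstract: since the operator is non-linear,
    # the empirical mean of an iteratively *sampled* fixed point is a
    # biased estimator of the true fixed-point mean. The paper provably
    # bounds truncation and sampling error; this sketch does not.
    return pop.mean()

print(fixed_point_mean(alpha=0.10))
```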
Related papers
- Active Learning for Level Set Estimation Using Randomized Straddle Algorithms [18.96269063427081]
We propose a novel method to identify the set of input points where a function takes a value above (or below) a given threshold.
The confidence parameter in the proposed method has the advantages of not needing adjustment, not depending on the number of iterations and candidate points, and not being conservative.
arXiv Detail & Related papers (2024-08-06T12:39:12Z)
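As a concrete illustration of the level-set problem in the entry above, here is a short sketch using the classic straddle acquisition of Bryan et al. (a deterministic heuristic, not the randomized-straddle method the paper proposes); the test function, threshold, and kernel choice are all illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x        # toy black-box function
h = 0.5                                       # level-set threshold
cand = np.linspace(0, 3, 300)[:, None]        # candidate points

X = cand[rng.choice(len(cand), 3, replace=False)]  # initial design
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))

for _ in range(20):
    gp.fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    # straddle score: large near the level set and where uncertainty is high
    score = 1.96 * sd - np.abs(mu - h)
    x_next = cand[[np.argmax(score)]]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

mu, _ = gp.predict(cand, return_std=True)
print(f"{(mu > h).sum()} of {len(cand)} candidates classified above h")
```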
- Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z)
- Truncation Sampling as Language Model Desmoothing [115.28983143361681]
Long samples of text from neural language models can be of poor quality.
Truncation sampling algorithms set some words' probabilities to zero at each step.
We introduce $\eta$-sampling, which truncates words below an entropy-dependent probability threshold.
arXiv Detail & Related papers (2022-10-27T05:52:35Z)
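A sketch of the idea in the truncation-sampling entry above. The specific threshold rule here, $\eta = \min(\epsilon, \sqrt{\epsilon}\, e^{-H(p)})$, is one reading of the paper's $\eta$-sampling and should be checked against the original; the logits and $\epsilon$ value are illustrative.

```python
import numpy as np

def eta_sample(logits, eps=2e-4, rng=None):
    # Hedged reconstruction of eta-sampling: drop tokens whose probability
    # falls below an entropy-dependent threshold, renormalize, and sample.
    rng = rng or np.random.default_rng()
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    eta = min(eps, np.sqrt(eps) * np.exp(-entropy))  # assumed threshold rule
    keep = p >= eta
    if not keep.any():            # safeguard: always keep the argmax token
        keep[np.argmax(p)] = True
    q = np.where(keep, p, 0.0)
    q /= q.sum()
    return rng.choice(len(p), p=q)

print(eta_sample(np.array([2.0, 1.0, 0.1, -3.0]), rng=np.random.default_rng(2)))
```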
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- TURF: A Two-factor, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z)
- Computation of conditional expectations with guarantees [1.4213973379473654]
A conditional expectation of a square-integrable random variable $Y$ given a $d$-dimensional random vector $X$ can be obtained by minimizing the mean squared distance between $Y$ and $f(X)$ over all Borel functions $f \colon \mathbb{R}^d \to \mathbb{R}$.
In this paper, we derive an expected value representation of the minimal mean square distance which in many applications can efficiently be approximated with a standard Monte Carlo average.
arXiv Detail & Related papers (2021-12-03T09:26:28Z)
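The entry above suggests a simple recipe that is easy to sketch: fit an approximation $\hat{f}$ of the conditional expectation, then Monte Carlo the squared residual on fresh samples to estimate the minimal mean squared distance. The regressor and data below are illustrative; the paper's specific representation and guarantees are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
n = 100_000
X = rng.normal(size=(n, 1))
Y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=n)   # E[Y|X] = X^2

# fit an approximation fhat of the conditional expectation x -> E[Y|X=x]
poly = PolynomialFeatures(degree=3)
fhat = LinearRegression().fit(poly.fit_transform(X), Y)

# Monte Carlo average of the squared residual on fresh samples:
# estimates E[(Y - fhat(X))^2], close to the minimal MSE Var(noise) = 0.25
X2 = rng.normal(size=(n, 1))
Y2 = X2[:, 0] ** 2 + rng.normal(scale=0.5, size=n)
resid = Y2 - fhat.predict(poly.transform(X2))
print((resid ** 2).mean())
```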
- PAC Mode Estimation using PPR Martingale Confidence Sequences [5.623190096715942]
We consider the problem of correctly identifying the mode of a discrete distribution $\mathcal{P}$ with sufficiently high probability.
We propose a generalisation to mode estimation, in which $\mathcal{P}$ may take $K \geq 2$ values.
Our resulting stopping rule, denoted PPR-ME, is optimal in its sample complexity up to a logarithmic factor.
arXiv Detail & Related papers (2021-09-10T18:11:38Z)
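To illustrate the sequential flavor of the entry above: the sketch below stops once the top two empirical frequencies are separated by a crude Hoeffding-style radius. This deliberately replaces the paper's PPR martingale confidence sequences with a simpler, more conservative stopping rule; the distribution and constants are illustrative.

```python
import numpy as np

def estimate_mode(sample, K, delta=0.05, batch=500, max_n=200_000):
    # Sequential mode estimation for a K-valued distribution, stopping when
    # a Hoeffding-style bound separates the two largest empirical frequencies.
    counts = np.zeros(K, dtype=int)
    n = 0
    while n < max_n:
        counts += np.bincount(sample(batch), minlength=K)
        n += batch
        top2 = np.sort(counts)[-2:] / n          # two largest frequencies
        radius = np.sqrt(2 * np.log(2 * n * n / delta) / n)
        if top2[1] - top2[0] > 2 * radius:       # conservative separation test
            break
    return int(np.argmax(counts)), n

rng = np.random.default_rng(4)
P = np.array([0.5, 0.3, 0.2])
mode, n_used = estimate_mode(lambda b: rng.choice(3, size=b, p=P), K=3)
print(mode, n_used)  # expect mode 0
```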
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z)
- Active learning for distributionally robust level-set estimation [16.869069414080847]
We consider a black-box function $f$ with high evaluation cost that depends on two types of variables, $\bm{x}$ and $\bm{w}$.
A natural measure of robustness is the probability that $f(\bm{x}, \bm{w})$ exceeds a given threshold $h$.
We propose a theoretically grounded and computationally efficient active learning method for this problem.
arXiv Detail & Related papers (2021-02-08T04:43:31Z)
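The robustness measure in the entry above is straightforward to estimate pointwise by plain Monte Carlo (shown below with an invented $f$, threshold, and noise model); the paper's contribution is doing this sample-efficiently via active learning, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x, w: np.sin(3 * x) + 0.3 * w   # toy stand-in for the black box
h = 0.5                                     # threshold

def robustness(x, n_w=10_000):
    # Monte Carlo estimate of P_w[f(x, w) > h] under w ~ N(0, 1)
    w = rng.normal(size=n_w)
    return np.mean(f(x, w) > h)

for x in np.linspace(0.0, 3.0, 7):
    print(f"x = {x:.2f}: robustness ~ {robustness(x):.3f}")
```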
- List-Decodable Mean Estimation in Nearly-PCA Time [50.79691056481693]
We study the fundamental task of list-decodable mean estimation in high dimensions.
Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.
A variant of our algorithm has runtime $\widetilde{O}(ndk)$ for all $k$, at the expense of an $O(\sqrt{\log k})$ factor in the recovery guarantee.
arXiv Detail & Related papers (2020-11-19T17:21:37Z)
- List-Decodable Mean Estimation via Iterative Multi-Filtering [44.805549762166926]
We are given a set $T$ of points in $\mathbb{R}^d$ with the promise that an unknown $\alpha$-fraction of points in $T$ are drawn from a distribution $D$ with unknown mean and bounded covariance.
We output a small list of hypothesis vectors such that at least one of them is close to the mean of $D$.
In more detail, our algorithm is sample and computationally efficient, and achieves information-theoretically near-optimal error.
arXiv Detail & Related papers (2020-06-18T17:47:37Z)
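For intuition about the list-decodable setting in the entries above, the sketch below generates a dataset where only an $\alpha$-fraction is drawn near a true mean, then outputs a list of hypothesis vectors via plain k-means with roughly $1/\alpha$ clusters. This is a naive baseline without the provable guarantees of iterative multi-filtering or the nearly-PCA-time algorithm; all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
alpha, n, d = 0.1, 2_000, 10
true_mean = np.full(d, 3.0)
n_in = int(alpha * n)
inliers = true_mean + rng.normal(size=(n_in, d))       # alpha-fraction from D
outliers = rng.normal(scale=8.0, size=(n - n_in, d))   # adversarial stand-in
T = rng.permutation(np.vstack([inliers, outliers]))

# naive list-decoding baseline: output all ~1/alpha cluster centers as the list
centers = KMeans(n_clusters=int(1 / alpha), n_init=5).fit(T).cluster_centers_
err = np.linalg.norm(centers - true_mean, axis=1).min()
print(f"best hypothesis in the list is within {err:.2f} of the true mean")
```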