How to Rationally Select Your Delegatee in PoS
- URL: http://arxiv.org/abs/2310.08895v1
- Date: Fri, 13 Oct 2023 06:58:29 GMT
- Title: How to Rationally Select Your Delegatee in PoS
- Authors: Yuzhe Zhang, Qin Wang, Shiping Chen, Chen Wang
- Abstract summary: This paper centers around a simple yet crucial question for everyday users: How should one choose their delegated validators within proof-of-stake (PoS) protocols?
We propose a Bayesian model to quantify normal users' trust in delegatees, which we further incorporate into a game-theoretical model to simulate users' reactions.
Our results reveal that users tend to choose their delegatees and utilize their tokens by carefully weighing the delegation cost, the behaviors of other users, and the reputation of delegatees.
- Score: 7.541721598051209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper centers around a simple yet crucial question for everyday users: How should one choose their delegated validators within proof-of-stake (PoS) protocols, particularly in the context of Ethereum 2.0? This has been a long-overlooked gap, as existing studies have primarily focused on inter-committee (validator set) behaviors and activities, while neglecting the dynamic formation of committees, especially for individual stakeholders seeking reliable validators. Our study bridges this gap by diving into the delegation process (normal users delegate their small-value tokens to delegatees who later act as validators) before entering an actual consensus phase. We propose a Bayesian model to quantify normal users' trust in delegatees, which we further incorporate into a game-theoretical model to simulate users' reactions against a set of critical factors identified through extensive research (including 10+ staking service providers as well as 30+ PoS blockchains). Our results reveal that users tend to choose their delegatees and utilize their tokens by carefully weighing the delegation cost, the behaviors of other users, and the reputation of delegatees, ultimately reaching a Nash equilibrium. Unfortunately, the collective trend significantly increases the likelihood of token concentration on a small number of delegatees.
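The abstract does not specify the form of the Bayesian trust model. As an illustration only, a minimal Beta-Bernoulli sketch of how a user might update trust in a delegatee from its observed validation record (the class name, the uniform prior, and the per-epoch honest/faulty observation are all assumptions, not the paper's construction):

```python
from dataclasses import dataclass

@dataclass
class DelegateeTrust:
    """Beta-Bernoulli trust estimate for a single delegatee.

    alpha counts observed honest validation epochs, beta counts
    observed faults; both start at 1 (a uniform Beta(1,1) prior).
    """
    alpha: float = 1.0
    beta: float = 1.0

    def observe(self, honest: bool) -> None:
        # Conjugate update: each observed epoch shifts the Beta posterior.
        if honest:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Posterior mean of P(delegatee behaves honestly next epoch).
        return self.alpha / (self.alpha + self.beta)

# A delegatee observed honest in 9 of 10 epochs:
t = DelegateeTrust()
for ok in [True] * 9 + [False]:
    t.observe(ok)
print(round(t.trust, 3))  # (1+9)/(2+10) ≈ 0.833
```

A trust score of this kind could then enter the game-theoretic stage as one input alongside delegation cost and other users' behavior; the actual coupling used in the paper is not given in this summary.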
Related papers
- GenCI: Generative Modeling of User Interest Shift via Cohort-based Intent Learning for CTR Prediction [84.0125708499372]
We propose a generative user intent framework to model user preferences for click-through rate (CTR) prediction.
The framework first employs a generative model, trained with a next-item prediction objective, to proactively produce candidate interest cohorts.
A hierarchical candidate-aware network then injects this rich contextual signal into the ranking stage, refining them with cross-attention to align with both user history and the target item.
arXiv Detail & Related papers (2026-01-26T08:15:04Z)
- When to Ensemble: Identifying Token-Level Points for Stable and Fast LLM Ensembling [41.54273937469359]
We show that using existing ensemble methods in long-form generation requires a careful choice of ensembling positions.
We propose SAFE (Stable And Fast LLM Ensembling), a framework that selectively ensembles by jointly considering these factors.
Our experiments on diverse benchmarks, including MATH500 and BBH, demonstrate that SAFE outperforms existing methods in both accuracy and efficiency.
arXiv Detail & Related papers (2025-10-17T06:18:29Z)
- LaSeR: Reinforcement Learning with Last-Token Self-Rewarding [54.72617309922891]
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs).
Previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency.
We propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with an MSE loss.
arXiv Detail & Related papers (2025-10-16T17:55:11Z)
- Fairness in Token Delegation: Mitigating Voting Power Concentration in DAOs [0.0]
DAOs aim to enable participatory governance, but in practice face challenges of voter apathy, concentration of voting power, and misaligned delegation.
Existing delegation mechanisms often reinforce biases, where a small set of highly ranked delegates accumulate disproportionate influence regardless of their alignment with the broader community.
We conduct an empirical study of delegation in governance, combining on-chain data from five major protocols with off-chain discussions from 14 forums.
arXiv Detail & Related papers (2025-10-07T11:53:40Z)
- An Incentive-Compatible Reward Sharing Mechanism for Mitigating Mirroring Attacks in Decentralized Data-Feed Systems [9.565675024370941]
We study the impact of mirroring attacks on the reliability and dependability of voting-based data-feed systems.
We propose a new incentive mechanism that discourages Sybil behavior.
arXiv Detail & Related papers (2025-09-14T14:33:17Z)
- Reward-Driven Interaction: Enhancing Proactive Dialogue Agents through User Satisfaction Prediction [22.105598216923706]
We propose two auxiliary tasks to improve the representation learning of user utterances and sessions, which enhances user satisfaction prediction.
The proposed method is evaluated on DuerOS, demonstrating significant improvements in the accuracy of error recognition on rare user utterances and long-tailed domains.
arXiv Detail & Related papers (2025-05-24T15:01:30Z)
- Decentralization in PoS Blockchain Consensus: Quantification and Advancement [10.679753825744964]
This study quantifies the decentralization of consensus mechanisms in proof-of-stake (PoS) blockchains.
We introduce two alternative weighting models for PoS consensus: Square Root Stake Weight (SRSW) and Logarithmic Stake Weight (LSW).
Results demonstrate that SRSW and LSW models improve decentralization metrics by an average of 51% and 132%, respectively.
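The two weighting models above can be sketched directly from their names (a minimal sketch; the exact logarithm base and offset are assumptions, as the paper's precise definitions are not given in this summary, and all function names are hypothetical):

```python
import math

def srsw(stakes):
    """Square Root Stake Weight: w_i = sqrt(s_i)."""
    return [math.sqrt(s) for s in stakes]

def lsw(stakes):
    """Logarithmic Stake Weight, here taken as w_i = log(1 + s_i).

    The offset avoids non-positive weights for tiny stakes; the
    paper's exact formulation may differ.
    """
    return [math.log1p(s) for s in stakes]

def shares(weights):
    """Normalize weights into consensus-power shares."""
    total = sum(weights)
    return [w / total for w in weights]

stakes = [1000, 100, 10]  # one large and two small validators
print([round(x, 3) for x in shares(stakes)])        # raw stake: largest dominates
print([round(x, 3) for x in shares(srsw(stakes))])  # square root compresses the gap
print([round(x, 3) for x in shares(lsw(stakes))])   # log compresses it further
```

Running this shows the qualitative effect reported in the entry: both transforms shrink the largest validator's share of consensus power, with the logarithmic weighting flattening the distribution more aggressively than the square root.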
arXiv Detail & Related papers (2025-04-19T16:33:30Z)
- TrustChain: A Blockchain Framework for Auditing and Verifying Aggregators in Decentralized Federated Learning [6.144680854063938]
This paper proposes a DFL structure, called TrustChain, that scores the aggregators before selection based on their past behavior and audits them after the aggregation.
The proposed method relies on several principles, including blockchain, anomaly detection, and concept drift analysis.
arXiv Detail & Related papers (2025-02-23T02:26:17Z)
- Fuzzychain: An Equitable Consensus Mechanism for Blockchain Networks [12.433289572707212]
Fuzzychain is a proposed solution to the drawbacks of Proof of Stake (PoS).
It introduces the use of fuzzy sets to define stake semantics, promoting decentralised and distributed processing control.
Our results indicate that Fuzzychain not only matches PoS in functionality but also ensures a fairer distribution of stakes among validators.
arXiv Detail & Related papers (2024-04-20T10:01:40Z)
- Decentralized Blockchain-based Robust Multi-agent Multi-armed Bandit [12.547006167704398]
We study a robust multi-agent multi-armed bandit problem, i.e., one in the presence of malicious participants, where multiple participants are distributed on a fully decentralized blockchain.
We are the first to incorporate advanced techniques from blockchains into a cooperative decision making framework to design optimal strategies for honest participants.
Notably, we are the first to prove the theoretical regret of the proposed algorithm and claim its optimality.
arXiv Detail & Related papers (2024-02-06T21:33:34Z)
- Analyzing Geospatial Distribution in Blockchains [15.432313954857106]
We analyze an often-overlooked but quantifiable dimension of blockchain decentralization: the geospatial distribution of transaction processing.
Minority validators often fail to meet performance requirements and are frequently misidentified as crash failures.
We develop a solution that easily integrates with consensus protocols.
arXiv Detail & Related papers (2023-05-28T16:35:01Z)
- A Blockchain-based Platform for Reliable Inference and Training of Large-Scale Models [1.323497585762675]
We introduce BRAIN, a novel platform specifically designed to ensure reliable inference and training of large models.
BRAIN harnesses a unique two-phase transaction mechanism, allowing real-time processing via pipelining.
BRAIN delivers considerably higher inference throughput at reasonable gas fees.
arXiv Detail & Related papers (2023-05-06T14:21:41Z)
- Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain-shift and category-shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training on the target-adapted known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
- The Minority Matters: A Diversity-Promoting Collaborative Metric Learning Algorithm [154.47590401735323]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2022-09-30T08:02:18Z)
- Spatio-Temporal Graph Representation Learning for Fraudster Group Detection [50.779498955162644]
Companies may hire fraudster groups to write fake reviews to either demote competitors or promote their own businesses.
To detect such groups, a common model is to represent fraudster groups' static networks.
We propose to first capitalize on the effectiveness of the HIN-RNN for reviewers' representation learning.
arXiv Detail & Related papers (2022-01-07T08:01:38Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we consider non-robust features to be a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- Maximizing Information Gain in Partially Observable Environments via Prediction Reward [64.24528565312463]
This paper tackles the challenge of using belief-based rewards for a deep RL agent.
We derive the exact error between negative entropy and the expected prediction reward.
This insight provides theoretical motivation for several fields using prediction rewards.
arXiv Detail & Related papers (2020-05-11T08:13:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.