Unraveling Responsiveness of Chained BFT Consensus with Network Delay
- URL: http://arxiv.org/abs/2501.03695v1
- Date: Tue, 07 Jan 2025 10:50:15 GMT
- Title: Unraveling Responsiveness of Chained BFT Consensus with Network Delay
- Authors: Yining Tang, Qihang Luo, Runchao Han, Jianyu Niu, Chen Feng, Yinqian Zhang
- Abstract summary: Chained Byzantine Fault Tolerant (BFT) protocols have been increasingly adopted in practical systems.
We introduce a unified framework utilizing Markov Decision Processes (MDP) to model and assess the performance of three prominent chained BFT protocols.
- Score: 22.287511037444265
- License:
- Abstract: With the advancement of blockchain technology, chained Byzantine Fault Tolerant (BFT) protocols have been increasingly adopted in practical systems, making their performance a crucial subject of study. In this paper, we introduce a unified framework utilizing Markov Decision Processes (MDP) to model and assess the performance of three prominent chained BFT protocols. Our framework effectively captures complex adversarial behaviors, focusing on two key performance metrics: chain growth and commitment rate. We implement the optimal attack strategies obtained from MDP analysis on an existing evaluation platform for chained BFT protocols and conduct extensive experiments under various settings to validate our theoretical results. Through rigorous theoretical analysis and thorough practical experiments, we provide an in-depth evaluation of chained BFT protocols under diverse attack scenarios, uncovering optimal attack strategies. Contrary to conventional belief, our findings reveal that while responsiveness can enhance performance, it is not universally beneficial across all scenarios. This work not only deepens our understanding of chained BFT protocols, but also offers valuable insights and analytical tools that can inform the design of more robust and efficient protocols.
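To make the MDP framing above concrete, below is a minimal, illustrative value-iteration sketch in Python. It uses a toy K-chain commit abstraction with assumed parameters (K, ALPHA, GAMMA) and a simplified follow-or-withhold action space; it is not the paper's actual state space, transition model, or reward definition, only a sketch of how an adversary's commitment-minimizing strategy can be computed from such a model.

```python
# Minimal sketch, NOT the paper's actual MDP: a toy "K-chain commit" abstraction
# in which an adversary leading a fraction ALPHA of views chooses to follow the
# protocol or to withhold its proposal, breaking the chain of consecutive
# certified blocks. Reward counts commitments; value iteration recovers the
# adversary's commitment-minimizing strategy. All parameters are illustrative.

import numpy as np

K = 3            # a block commits once K consecutive blocks extend it (3-chain rule)
ALPHA = 0.3      # assumed fraction of views led by the adversary
GAMMA = 0.99     # discount factor for the infinite-horizon objective
STATES = range(K)            # s = current length of the consecutive chain (capped)
ACTIONS = ("follow", "withhold")

def transitions(s, a):
    """(probability, next_state, reward) triples; reward 1.0 on each commitment."""
    advance = (min(s + 1, K - 1), 1.0 if s + 1 >= K else 0.0)
    honest = (1.0 - ALPHA, *advance)                   # honest leader extends the chain
    if a == "follow":
        byz = (ALPHA, *advance)                        # adversary behaves honestly
    else:
        byz = (ALPHA, 0, 0.0)                          # adversary withholds: chain resets
    return [honest, byz]

# Value iteration from the adversary's point of view: MINIMIZE discounted commitments.
V = np.zeros(K)
for _ in range(10_000):
    Q = np.array([[sum(p * (r + GAMMA * V[ns]) for p, ns, r in transitions(s, a))
                   for a in ACTIONS] for s in STATES])
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = {s: ACTIONS[int(Q[s].argmin())] for s in STATES}
print("adversary's value (discounted commitments) per state:", np.round(V, 3))
print("adversary-optimal strategy:", policy)   # expected: withhold in every state
```

In the paper's framework, the same machinery would operate over far richer states capturing each protocol's voting and commit rules, with chain growth and commitment rate as the reward signals.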
Related papers
- Defending Against Diverse Attacks in Federated Learning Through Consensus-Based Bi-Level Optimization [6.484902940268382]
Adversarial attacks pose significant challenges in many machine learning applications.
We conduct a theoretical analysis of the resilience of consensus-based bi-level optimization (CB$^2$O) in adversarial settings.
We propose FedCB$^2$O, a novel interacting multi-particle system, and design a practical algorithm that addresses the demands of real-world applications.
arXiv Detail & Related papers (2024-12-03T16:26:56Z)
- Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought [61.588465852846646]
Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs).
In this work, we introduce a novel reasoning boundary framework (RBF) to address these challenges.
arXiv Detail & Related papers (2024-10-08T05:26:28Z)
- Optimal Protocols for Continual Learning via Statistical Physics and Control Theory [7.519872646378836]
Artificial neural networks often struggle with catastrophic forgetting when learning multiple tasks sequentially.
Recent theoretical work has addressed this issue by analysing learning curves in synthetic frameworks under training protocols.
We fill this gap by combining exact equations for the training dynamics, derived using statistical physics techniques, with optimal control methods.
Our theoretical analysis offers non-trivial yet interpretable strategies for mitigating catastrophic forgetting.
arXiv Detail & Related papers (2024-09-26T17:01:41Z)
- Simple Perturbations Subvert Ethereum Phishing Transactions Detection: An Empirical Analysis [12.607077453567594]
We investigate the impact of various adversarial attack strategies on model performance metrics, such as accuracy, precision, recall, and F1-score.
We examine the effectiveness of different mitigation strategies, including adversarial training and enhanced feature selection, in improving model robustness.
arXiv Detail & Related papers (2024-08-06T20:40:20Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired.
arXiv Detail & Related papers (2023-05-29T15:00:09Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method can achieve robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Building Robust Ensembles via Margin Boosting [98.56381714748096]
In adversarial robustness, a single model does not usually have enough power to defend against all possible adversarial attacks.
We develop an algorithm for learning an ensemble with maximum margin.
We show that our algorithm outperforms not only existing ensembling techniques but also large models trained in an end-to-end fashion.
arXiv Detail & Related papers (2022-06-07T14:55:58Z)
- LiBRe: A Practical Bayesian Approach to Adversarial Detection [36.541671795530625]
LiBRe can endow a variety of pre-trained task-dependent DNNs with the ability to defend against heterogeneous adversarial attacks at a low cost.
We build a few-layer deep ensemble variational approximation and adopt the pre-training & fine-tuning workflow to boost the effectiveness and efficiency of LiBRe.
arXiv Detail & Related papers (2021-03-27T07:48:58Z)
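For the LiBRe entry above, the following is a rough, self-contained sketch of the general uncertainty-thresholding idea behind Bayesian adversarial detection, not LiBRe's actual few-layer variational construction or its pre-training & fine-tuning workflow: inputs are scored by the disagreement of a small prediction ensemble and flagged when the score exceeds a threshold. The ensemble, inputs, and threshold below are synthetic placeholders.

```python
# Illustrative sketch only: ensemble-disagreement scoring for adversarial
# detection. The "ensemble" is a set of random linear heads over fixed features
# and the inputs are synthetic; this shows the scoring/flagging mechanics, not
# LiBRe's architecture or its detection accuracy.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

K, D, C = 5, 16, 3                                       # heads, feature dim, classes
heads = [rng.normal(scale=0.5, size=(D, C)) for _ in range(K)]

def uncertainty_score(x):
    """Total variance of class probabilities across the K heads (higher = more uncertain)."""
    probs = np.stack([softmax(x @ W) for W in heads])    # shape (K, C)
    return float(probs.var(axis=0).sum())

# In practice the threshold would be calibrated on held-out clean data.
THRESHOLD = 0.05
inputs = {"example_a": rng.normal(size=D), "example_b": rng.normal(scale=3.0, size=D)}
for name, x in inputs.items():
    score = uncertainty_score(x)
    print(f"{name}: uncertainty={score:.4f} flagged={score > THRESHOLD}")
```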
This list is automatically generated from the titles and abstracts of the papers on this site.