EByFTVeS: Efficient Byzantine Fault Tolerant-based Verifiable Secret-sharing in Distributed Privacy-preserving Machine Learning
- URL: http://arxiv.org/abs/2509.12899v1
- Date: Tue, 16 Sep 2025 09:54:27 GMT
- Title: EByFTVeS: Efficient Byzantine Fault Tolerant-based Verifiable Secret-sharing in Distributed Privacy-preserving Machine Learning
- Authors: Zhen Li, Zijian Zhang, Wenjin Yang, Pengbo Wang, Zhaoqi Wang, Meng Li, Yan Wu, Xuyang Liu, Jing Sun, Liehuang Zhu,
- Abstract summary: Verifiable Secret Sharing (VSS) has been widely used in Distributed Privacy-preserving Machine Learning (DPML). We explore an Adaptive Share Delay Provision (ASDP) strategy and launch an ASDP-based Customized Model Poisoning Attack (ACuMPA). We then propose an [E]fficient [By]zantine [F]ault [T]olerant-based [Ve]rifiable [S]ecret-sharing (EByFTVeS) scheme.
- Score: 40.83579204799341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Verifiable Secret Sharing (VSS) has been widely used in Distributed Privacy-preserving Machine Learning (DPML), because honest participants can recognize invalid shares from malicious dealers or participants by verifying the commitments of the received shares. However, consistency and the computation and communication burden of VSS-based DPML schemes remain two serious challenges. Although Byzantine Fault Tolerance (BFT) systems have recently been introduced to guarantee consistency and improve the efficiency of existing VSS-based DPML schemes, in this paper we explore an Adaptive Share Delay Provision (ASDP) strategy and launch an ASDP-based Customized Model Poisoning Attack (ACuMPA) against certain participants. We theoretically analyze why the ASDP strategy and the ACuMPA algorithm work against existing schemes. Next, we propose an [E]fficient [By]zantine [F]ault [T]olerant-based [Ve]rifiable [S]ecret-sharing (EByFTVeS) scheme. Finally, the validity, liveness, consistency, and privacy of the EByFTVeS scheme are theoretically analyzed, while comparative experimental results show that its efficiency outperforms that of the state-of-the-art VSS scheme.
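The commitment check the abstract describes can be illustrated with a minimal Feldman-style VSS sketch. This is a toy illustration only, not the paper's EByFTVeS construction: the tiny primes, generator, and function names below are all assumptions chosen for readability, and a real deployment would use cryptographically sized parameters.

```python
import random

# Toy parameters (assumption: tiny primes, for illustration only).
# p = 2q + 1 is a safe prime; g = 4 generates the order-q subgroup mod p.
P, Q, G = 2039, 1019, 4

def deal(secret, threshold, n_parties):
    """Feldman VSS dealer: split `secret` into shares plus public commitments."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(threshold - 1)]
    commitments = [pow(G, a, P) for a in coeffs]      # C_i = g^{a_i} mod p
    shares = []
    for j in range(1, n_parties + 1):                 # share s_j = f(j) mod q
        s = sum(a * pow(j, i, Q) for i, a in enumerate(coeffs)) % Q
        shares.append((j, s))
    return shares, commitments

def verify(j, s, commitments):
    """An honest participant checks g^s == prod_i C_i^(j^i) mod p,
    which rejects invalid shares from a malicious dealer."""
    lhs = pow(G, s, P)
    rhs = 1
    for i, c in enumerate(commitments):
        rhs = rhs * pow(c, pow(j, i, Q), P) % P
    return lhs == rhs
```

A valid share passes `verify`, while any tampered share value fails it, which is the recognition property the abstract relies on.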
Related papers
- From SFT to RL: Demystifying the Post-Training Pipeline for LLM-based Vulnerability Detection [2.6678231901651723]
We present the first comprehensive investigation into the post-training pipeline for LLM-based vulnerability detection. Our study identifies practical guidelines and insights, including how data curation, stage interactions, reward mechanisms, and evaluation protocols collectively dictate the efficacy of model training and assessment.
arXiv Detail & Related papers (2026-02-15T06:33:25Z) - Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation [71.86087908416255]
We introduce a payoff allocation framework based on the least core (LC) concept. Unlike traditional methods, the LC prioritizes the cohesion of the federation by minimizing the maximum dissatisfaction. Case studies in federated intrusion detection demonstrate that our mechanism correctly identifies pivotal contributors and strategic alliances.
arXiv Detail & Related papers (2026-02-03T11:10:50Z) - PD-Loss: Proxy-Decidability for Efficient Metric Learning [0.03891635937589588]
We introduce Proxy-Decidability Loss (PD-Loss), a novel objective that integrates learnable proxies with the statistical framework of d' to optimize embedding spaces efficiently. PD-Loss achieves performance comparable to that of state-of-the-art methods while introducing a new perspective on embedding optimization.
arXiv Detail & Related papers (2025-08-23T16:46:00Z) - DP2Guard: A Lightweight and Byzantine-Robust Privacy-Preserving Federated Learning Scheme for Industrial IoT [37.44256772381154]
DP2Guard is a lightweight PPFL framework that enhances both privacy and robustness. A hybrid defense strategy is proposed, which extracts gradient features using singular value decomposition and cosine similarity. A trust score-based adaptive aggregation scheme adjusts client weights according to historical behavior.
arXiv Detail & Related papers (2025-07-22T01:06:39Z) - Byzantine-Resilient Federated Learning via Distributed Optimization [3.2075234058213757]
Byzantine attacks present a critical challenge to Federated Learning (FL). Traditional FL frameworks rely on aggregation-based protocols for model updates, leaving them vulnerable to sophisticated adversarial strategies. We show that the Primal-Dual Method of Multipliers (PDMM) inherently mitigates Byzantine impacts by leveraging its fault-tolerant consensus mechanism.
arXiv Detail & Related papers (2025-03-13T18:34:42Z) - Revisiting Locally Differentially Private Protocols: Towards Better Trade-offs in Privacy, Utility, and Attack Resistance [4.5282933786221395]
Local Differential Privacy (LDP) offers strong privacy protection, especially in settings in which the server collecting the data is untrusted. We introduce a general multi-objective optimization framework for refining LDP protocols. Our framework enables modular and context-aware deployment of LDP mechanisms with tunable privacy-utility trade-offs.
arXiv Detail & Related papers (2025-03-03T12:41:01Z) - Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Mimicking Better by Matching the Approximate Action Distribution [48.95048003354255]
We introduce MAAD, a novel, sample-efficient on-policy algorithm for Imitation Learning from Observations.
We show that it requires considerably fewer interactions to achieve expert performance, outperforming current state-of-the-art on-policy methods.
arXiv Detail & Related papers (2023-06-16T12:43:47Z)
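Several of the listed papers combine gradient compression with Byzantine resilience. As a hedged sketch of the ternary-compressor-plus-majority-vote idea in the TernaryVote entry (a generic illustration under assumed conventions, not the paper's exact algorithm or its privacy mechanism), each worker quantizes its gradient to {-1, 0, +1} and the server takes a coordinate-wise majority:

```python
import numpy as np

def ternarize(grad, rng):
    """Stochastic ternary compressor: map each coordinate to {-1, 0, +1},
    keeping a coordinate's sign with probability |g_i| / max|g|."""
    scale = np.max(np.abs(grad)) or 1.0
    keep = rng.random(grad.shape) < np.abs(grad) / scale
    return np.sign(grad).astype(int) * keep

def majority_vote(ternary_grads):
    """Server side: coordinate-wise sign of the summed worker votes.
    A minority of Byzantine workers cannot flip a coordinate on which
    the honest majority agrees."""
    return np.sign(np.sum(ternary_grads, axis=0)).astype(int)
```

For example, with three workers where one sends the negated (Byzantine) vote, the honest pair still carries each coordinate.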
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.