PraxiMLP: A Threshold-based Framework for Efficient Three-Party MLP with Practical Security
- URL: http://arxiv.org/abs/2511.06104v1
- Date: Sat, 08 Nov 2025 18:56:26 GMT
- Title: PraxiMLP: A Threshold-based Framework for Efficient Three-Party MLP with Practical Security
- Authors: Tianle Tao, Shizhao Peng, Haogang Zhu
- Abstract summary: PraxiMLP is a highly efficient three-party framework for Privacy-Preserving Machine Learning (PPML). PraxiMLP operates entirely within the arithmetic domain, thus avoiding expensive cross-domain conversions. By supporting floating-point numbers, PraxiMLP precisely handles non-linear functions, dramatically improving both efficiency and precision.
- Score: 3.0489147795290683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiency and communication cost remain critical bottlenecks for practical Privacy-Preserving Machine Learning (PPML). Most existing frameworks rely on fixed-point arithmetic for strong security, which introduces significant precision loss and requires expensive cross-domain conversions (e.g., Arithmetic-to-Boolean) for non-linear operations. To address this, we propose PraxiMLP, a highly efficient three-party MLP framework grounded in practical security. The core of our work is a pair of novel additive-to-multiplicative conversion protocols that operate entirely within the arithmetic domain, thus avoiding expensive cross-domain conversions. By natively supporting floating-point numbers, PraxiMLP precisely handles non-linear functions, dramatically improving both efficiency and precision. Experimental results confirm that, compared to mainstream PPML frameworks, PraxiMLP delivers an average 8 orders of magnitude precision improvement on basic protocols and a 5x average model training speedup in a WAN environment.
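The abstract leaves the protocol internals unspecified, so the Python sketch below only pins down the semantics an additive-to-multiplicative (A2M) conversion must satisfy, using a trusted-dealer stand-in; all names are illustrative, and PraxiMLP's actual protocols realize this mapping without reconstructing the secret in the clear.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_share(x, n=3):
    """Split a float x into n additive shares with sum(shares) == x."""
    r = rng.normal(size=n - 1)
    return np.append(r, x - r.sum())

def a2m_ideal(add_shares):
    """Ideal-functionality stand-in for additive-to-multiplicative (A2M)
    conversion: output multiplicative shares whose product equals the
    secret. The dealer reconstructs x here purely for illustration; the
    point of a real A2M protocol is to achieve the same input/output
    behavior without any party seeing x."""
    x = np.sum(add_shares)
    m = rng.uniform(0.5, 2.0, size=len(add_shares) - 1)  # random nonzero masks
    return np.append(m, x / np.prod(m))

adds = additive_share(3.14159)
mults = a2m_ideal(adds)
assert np.isclose(np.sum(adds), np.prod(mults))  # same secret, two encodings
```

Note why this encoding helps with non-linearities: on multiplicative shares, operations such as reciprocals and powers become share-local (e.g., 1/x is the product of the parties' local reciprocals), which is one way an arithmetic-only pipeline can sidestep Boolean conversions.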
Related papers
- POP: Prefill-Only Pruning for Efficient Large Model Inference [5.743318651374061]
Large Language Models (LLMs) and Vision-Language Models (VLMs) have demonstrated remarkable capabilities. Existing structured pruning methods, while hardware-efficient, often suffer from significant accuracy degradation. We argue that this failure stems from a stage-agnostic pruning approach that overlooks the asymmetric roles of the prefill and decode stages.
arXiv Detail & Related papers (2026-02-03T09:22:26Z) - Beyond the Ideal: Analyzing the Inexact Muon Update [54.70108543057578]
We present the first analysis of the inexact update at Muon's core. We reveal a fundamental coupling between this inexactness and the optimal step size and momentum.
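For context, Muon approximates an orthogonalized momentum update with a truncated Newton-Schulz iteration, and the truncation is the source of the inexactness under study. A minimal sketch using the cubic variant (step count and normalization are illustrative, not the paper's exact setup):

```python
import numpy as np

def newton_schulz_orth(G, steps=5):
    """Approximately orthogonalize G via cubic Newton-Schulz iterations.

    With X0 = G / ||G||_F (singular values in (0, sqrt(3))), the map
    X <- 1.5 X - 0.5 X X^T X pushes all singular values toward 1.
    Stopping after a few steps leaves a residual error -- the inexactness."""
    X = G / np.linalg.norm(G)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * (X @ X.T @ X)
    return X

G = np.random.default_rng(0).normal(size=(4, 6))
O = newton_schulz_orth(G, steps=5)
print(np.linalg.norm(O @ O.T - np.eye(4)))  # residual shrinks as steps grows
```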
arXiv Detail & Related papers (2025-10-22T18:01:07Z) - Peering Inside the Black Box: Uncovering LLM Errors in Optimization Modelling through Component-Level Evaluation [0.0]
We present a component-level evaluation framework for large language models (LLMs). We evaluate GPT-5, LLaMA 3.1 Instruct, and DeepSeek Math across optimization problems of varying complexity. Results show GPT-5 consistently outperforms other models, with chain-of-thought, self-consistency, and modular prompting proving most effective.
arXiv Detail & Related papers (2025-10-19T17:47:59Z) - Efficient Private Inference Based on Helper-Assisted Malicious Security Dishonest Majority MPC [5.797285315996385]
We propose a novel three-layer private inference framework based on the Helper-Assisted MSDM model. The framework achieves up to a 2.4-25.7x speedup in LAN and a 1.3-9.5x acceleration in WAN over state-of-the-art MSDM frameworks.
arXiv Detail & Related papers (2025-07-13T12:24:02Z) - EVA-S2PLoR: Decentralized Secure 2-party Logistic Regression with A Subtly Hadamard Product Protocol (Full Version) [2.1010315462623184]
This paper proposes an efficient, verifiable, and accurate secure two-party logistic regression framework (EVA-S2PLoR). It achieves accurate nonlinear function evaluation through a subtle secure Hadamard product protocol and its derived protocols. High efficiency and precision are guaranteed by an asynchronous computation flow on floating-point numbers and a small, fixed number of communication rounds.
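The paper's Hadamard protocol is its own construction; as a baseline for what such a protocol computes, here is a standard Beaver-triple sketch of an elementwise product over additively shared float vectors (dealer-generated randomness; not EVA-S2PLoR's protocol):

```python
import numpy as np

rng = np.random.default_rng(1)

def share(v):
    """Two-party additive sharing over floats: v = v0 + v1."""
    r = rng.normal(size=v.shape)
    return r, v - r

def hadamard_beaver(x_sh, y_sh):
    """Elementwise product of shared vectors using a dealer's Beaver triple."""
    a = rng.normal(size=x_sh[0].shape)          # dealer picks random a, b
    b = rng.normal(size=x_sh[0].shape)
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(a * b)                       # dealer shares c = a * b
    d = (x_sh[0] - a0) + (x_sh[1] - a1)         # opened masked value d = x - a
    e = (y_sh[0] - b0) + (y_sh[1] - b1)         # opened masked value e = y - b
    z0 = c0 + d * b0 + e * a0 + d * e           # only one party adds d * e
    z1 = c1 + d * b1 + e * a1
    return z0, z1                               # z0 + z1 == x * y elementwise

x, y = rng.normal(size=4), rng.normal(size=4)
z0, z1 = hadamard_beaver(share(x), share(y))
assert np.allclose(z0 + z1, x * y)
```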
arXiv Detail & Related papers (2025-01-09T13:19:59Z) - Progressive Mixed-Precision Decoding for Efficient LLM Inference [49.05448842542558]
We introduce Progressive Mixed-Precision Decoding (PMPD) to address the memory-boundedness of decoding. PMPD achieves a 1.4-12.2x speedup in matrix-vector multiplications over fp16 models. Our approach delivers a throughput gain of 3.8-8.0x over fp16 models and up to 1.54x over uniform quantization approaches.
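PMPD's scheduler and quantized kernels are not reproduced here; the toy below only illustrates the scheduling idea of serving early decode steps at higher precision and later steps at lower precision (the switch point and quantizer are illustrative):

```python
import numpy as np

def quantize_int8(W):
    """Symmetric per-tensor int8 quantization of a weight matrix."""
    scale = np.abs(W).max() / 127.0
    return np.round(W / scale).astype(np.int8), scale

def matvec(W_fp32, W_q, scale, h, step, switch_at=8):
    """Early decode steps use fp32 weights, later steps the int8 copy,
    mirroring the idea that early tokens are more error-sensitive."""
    if step < switch_at:
        return W_fp32 @ h                         # high-precision phase
    return (W_q.astype(np.float32) @ h) * scale   # low-precision phase

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 8)).astype(np.float32)
W_q, s = quantize_int8(W)
h = rng.normal(size=8).astype(np.float32)
for t in range(12):
    h = np.tanh(matvec(W, W_q, s, h, t))  # toy decode loop over 12 steps
```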
arXiv Detail & Related papers (2024-10-17T11:46:33Z) - Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models [79.34513906324727]
In this paper, we aim at parameter and computation efficient transfer learning (PCETL) for vision-language pre-trained models.
We propose a novel dynamic architecture skipping (DAS) approach towards effective PCETL.
arXiv Detail & Related papers (2023-09-04T09:34:33Z) - Approximated Prompt Tuning for Vision-Language Pre-trained Models [54.326232586461614]
In vision-language pre-trained models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks.
We propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning.
arXiv Detail & Related papers (2023-06-27T05:43:47Z) - Non-stationary Reinforcement Learning under General Function Approximation [60.430936031067006]
We first propose a new complexity metric called dynamic Bellman Eluder (DBE) dimension for non-stationary MDPs.
Based on the proposed complexity metric, we propose a novel confidence-set based model-free algorithm called SW-OPEA.
We show that SW-OPEA is provably efficient as long as the variation budget is not significantly large.
arXiv Detail & Related papers (2023-06-01T16:19:37Z) - LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning [56.88751562302793]
Low-rank adaptation (LoRA) has emerged as a way to fine-tune large language models (LLMs).
LoRAPrune is a new framework that delivers an accurate structured pruned model in a highly memory-efficient manner.
LoRAPrune achieves a reduction in perplexity by 4.81 on WikiText2 and 3.46 on PTB, while also decreasing memory usage by 52.6%.
arXiv Detail & Related papers (2023-05-28T15:15:48Z) - Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize a sum of ranked-range losses, which likewise lacks efficient solvers.
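For concreteness, the quantity being optimized, the partial AUC over an FPR band [fpr_lo, fpr_hi], can be evaluated directly from the ROC curve; a sketch of the evaluation only, not the paper's smoothed optimization method:

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_lo=0.05, fpr_hi=0.30):
    """Area under the ROC curve restricted to FPR in [fpr_lo, fpr_hi]."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    lo, hi = np.interp([fpr_lo, fpr_hi], fpr, tpr)  # TPR at the band edges
    inside = (fpr > fpr_lo) & (fpr < fpr_hi)
    xs = np.concatenate(([fpr_lo], fpr[inside], [fpr_hi]))
    ys = np.concatenate(([lo], tpr[inside], [hi]))
    return np.trapz(ys, xs)  # trapezoidal integral over the FPR band

rng = np.random.default_rng(3)
y = (rng.random(1000) < 0.5).astype(int)
s = y + rng.normal(scale=1.5, size=1000)  # noisy scores for a toy classifier
print(partial_auc(y, s))                  # pAUC on FPR in [0.05, 0.30]
```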
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Adam in Private: Secure and Fast Training of Deep Neural Networks with Adaptive Moment Estimation [6.342794803074475]
We propose a framework that allows efficient evaluation of full-fledged state-of-the-art machine learning algorithms.
This is in contrast to most prior works, which substitute ML algorithms with approximated "MPC-friendly" variants.
We obtain secure training that outperforms state-of-the-art three-party systems.
arXiv Detail & Related papers (2021-06-04T01:40:09Z)
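As a plaintext reference for what such frameworks evaluate under secret sharing, one Adam step is shown below; the secure version performs the same arithmetic, including the division and square root, on shares (toy loss and hyperparameters are illustrative):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One plaintext Adam update; a secure variant runs the same
    multiply/divide/sqrt arithmetic on secret-shared values."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

target = np.array([1.0, -2.0, 0.5])
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 301):
    grad = 2 * (theta - target)        # gradient of a toy quadratic loss
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)                           # converges toward [1, -2, 0.5]
```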