EVA-S2PMLP: Secure and Scalable Two-Party MLP via Spatial Transformation
- URL: http://arxiv.org/abs/2506.15102v1
- Date: Wed, 18 Jun 2025 03:18:35 GMT
- Title: EVA-S2PMLP: Secure and Scalable Two-Party MLP via Spatial Transformation
- Authors: Shizhao Peng, Shoumo Li, Tianle Tao,
- Abstract summary: This paper presents \textbf{EVA-S2PMLP}, an Efficient, Verifiable, and Accurate Secure Two-Party Multi-Layer Perceptron framework. EVA-S2PMLP achieves high inference accuracy and significantly reduced communication overhead, with up to $12.3\times$ improvement over baselines. It is a practical solution for privacy-preserving neural network training in finance, healthcare, and cross-organizational AI applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-preserving neural network training in vertically partitioned scenarios is vital for secure collaborative modeling across institutions. This paper presents \textbf{EVA-S2PMLP}, an Efficient, Verifiable, and Accurate Secure Two-Party Multi-Layer Perceptron framework that introduces spatial-scale optimization for enhanced privacy and performance. To enable reliable computation in the real-number domain, EVA-S2PMLP proposes a secure transformation pipeline that maps scalar inputs to vector and matrix spaces while preserving correctness. The framework includes a suite of atomic protocols for linear and non-linear secure computations, with modular support for secure activation, matrix-vector operations, and loss evaluation. Theoretical analysis confirms the reliability, security, and asymptotic complexity of each protocol. Extensive experiments show that EVA-S2PMLP achieves high inference accuracy and significantly reduced communication overhead, with up to $12.3\times$ improvement over baselines. Evaluation on benchmark datasets demonstrates that the framework maintains model utility while ensuring strict data confidentiality, making it a practical solution for privacy-preserving neural network training in finance, healthcare, and cross-organizational AI applications.
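The scalar-to-vector/matrix mapping above is specific to EVA-S2PMLP; as a generic illustration of the underlying principle that each party's view should be independent of the plaintext, here is a minimal additive secret-sharing sketch (the `share`/`reconstruct` helpers are hypothetical names for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def share(x, scale=1e6):
    """Split a real-valued array into two additive shares:
    party A receives x - r, party B receives r, where r is random."""
    r = rng.uniform(-scale, scale, size=np.shape(x))
    return x - r, r

def reconstruct(a, b):
    """Recover the plaintext by summing the two shares."""
    return a + b

x = np.array([3.5, -1.25, 0.0])
a, b = share(x)
assert np.allclose(reconstruct(a, b), x)
```

Neither share alone reveals `x`; linear operations can be carried out locally on each share, which is the starting point for the kind of matrix-vector protocols the abstract describes.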
Related papers
- Generalized Linear Bandits: Almost Optimal Regret with One-Pass Update
We study the generalized linear bandit (GLB) problem, a contextual multi-armed bandit framework that extends the classical linear model by incorporating a non-linear link function. GLBs are widely applicable to real-world scenarios, but their non-linear nature introduces significant challenges in achieving both computational and statistical efficiency. We propose a jointly efficient algorithm that attains a nearly optimal regret bound with $\mathcal{O}(1)$ time and space complexities per round.
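The $\mathcal{O}(1)$-per-round claim rests on incremental updates rather than refitting from scratch. One standard ingredient of such one-pass schemes (not necessarily this paper's exact algorithm) is the Sherman-Morrison rank-one update of the inverse Gram matrix, whose cost depends only on the dimension, not the round count:

```python
import numpy as np

def sherman_morrison_update(A_inv, x):
    """Given A^{-1}, return (A + x x^T)^{-1} in O(d^2) time,
    independent of how many rounds have elapsed."""
    Ax = A_inv @ x  # A_inv is symmetric, so A_inv x = (x^T A_inv)^T
    return A_inv - np.outer(Ax, Ax) / (1.0 + x @ Ax)

d = 3
A = np.eye(d)        # ridge-regularized Gram matrix
A_inv = np.eye(d)
for x in [np.array([1.0, 0.5, -0.2]), np.array([0.3, -1.0, 0.7])]:
    A += np.outer(x, x)
    A_inv = sherman_morrison_update(A_inv, x)
assert np.allclose(A_inv, np.linalg.inv(A))
```

Each arriving context updates the inverse directly, avoiding the cubic cost of re-inverting the Gram matrix every round.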
arXiv Detail & Related papers (2025-07-16T02:24:21Z) - Secure Distributed Learning for CAVs: Defending Against Gradient Leakage with Leveled Homomorphic Encryption
Homomorphic Encryption (HE) offers a promising alternative to Differential Privacy (DP) and Secure Multi-Party Computation (SMPC). We evaluate various HE schemes to identify the most suitable for Federated Learning (FL) in resource-constrained environments. We develop a full HE-based FL pipeline that effectively mitigates Deep Leakage from Gradients (DLG) attacks while preserving model accuracy.
arXiv Detail & Related papers (2025-06-09T16:12:18Z) - Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction
We propose a model-agnostic uncertainty quantification method that integrates dynamic threshold calibration and cross-modal consistency verification. We show that the framework achieves stable performance across varying calibration-to-test split ratios, underscoring its robustness for real-world deployment in healthcare, autonomous systems, and other safety-sensitive domains. This work bridges the gap between theoretical reliability and practical applicability in multi-modal AI systems, offering a scalable solution for hallucination detection and uncertainty-aware decision-making.
arXiv Detail & Related papers (2025-04-24T15:39:46Z) - EVA-S2PLoR: A Secure Element-wise Multiplication Meets Logistic Regression on Heterogeneous Database
This paper proposes an efficient, verifiable, and accurate secure two-party logistic regression framework (EVA-S2PLoR). The framework primarily includes secure two-party protocols for vector element-wise multiplication, addition to multiplication, reciprocal, and the sigmoid function, based on data disguising technology.
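EVA-S2PLoR builds its protocols on data disguising; as a hedged illustration of how two parties can compute an element-wise product of secret-shared vectors without revealing them, here is the classic Beaver-triple construction (a standard textbook technique, not the paper's own protocol), with a dealer supplying the correlated randomness:

```python
import numpy as np

rng = np.random.default_rng(1)

def share(v):
    """Additively split v into two random-looking shares."""
    r = rng.normal(size=np.shape(v))
    return v - r, r

def beaver_mul(x0, x1, y0, y1):
    """Element-wise product of shared vectors x = x0+x1 and y = y0+y1,
    using a dealer-generated triple (a, b, c) with c = a * b."""
    n = len(x0)
    a, b = rng.normal(size=n), rng.normal(size=n)
    c = a * b
    a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)
    # The masked differences are opened; they leak nothing about x or y.
    e = (x0 - a0) + (x1 - a1)   # e = x - a
    f = (y0 - b0) + (y1 - b1)   # f = y - b
    z0 = c0 + e * b0 + f * a0 + e * f   # party 0's output share
    z1 = c1 + e * b1 + f * a1           # party 1's output share
    return z0, z1

x = np.array([2.0, -3.0, 0.5]); y = np.array([4.0, 1.0, -2.0])
x0, x1 = share(x); y0, y1 = share(y)
z0, z1 = beaver_mul(x0, x1, y0, y1)
assert np.allclose(z0 + z1, x * y)
```

Correctness follows from expanding z = c + e*b + f*a + e*f = a*b + (x-a)*b + (y-b)*a + (x-a)*(y-b) = x*y.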
arXiv Detail & Related papers (2025-01-09T13:19:59Z) - The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
arXiv Detail & Related papers (2024-11-14T08:55:14Z) - EVA-S3PC: Efficient, Verifiable, Accurate Secure Matrix Multiplication Protocol Assembly and Its Application in Regression
EVA-S3PC achieves up to 14 significant decimal digits of precision in Float64 calculations.
3-party regression models trained using EVA-S3PC on vertically partitioned data achieve accuracy nearly identical to plaintext training.
arXiv Detail & Related papers (2024-11-05T18:38:44Z) - Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy, even with zero exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z) - Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
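Bound propagation, the primitive such verification tools scale up, can be sketched with interval arithmetic: a box of possible inputs is pushed through each layer to obtain sound (if loose) bounds on the outputs. Below is a minimal interval-bound-propagation example for a tiny ReLU network, purely illustrative and not any specific tool's implementation:

```python
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Propagate the input box [lo, hi] through y = W x + b.
    Positive weights pull from the matching bound, negative from the opposite."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Two-layer ReLU network with a small perturbation box around the origin.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = ibp_relu(*ibp_linear(W1, b1, lo, hi))
lo, hi = ibp_linear(W2, b2, lo, hi)
# [lo, hi] now soundly encloses every output the network can produce
# for any input inside the original box.
assert np.all(lo <= hi)
```

If a safety property (e.g. "output stays below a threshold") holds on the computed box, it provably holds for every input in the box; parallelizing over many such boxes is what the scaling work targets.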
arXiv Detail & Related papers (2023-12-10T13:51:25Z) - Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing using fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
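Receptive-field-based partitioning hinges on knowing how far each output pixel's dependence reaches back into the input. For stacked convolutions this follows a simple recurrence; the sketch below is a generic receptive-field calculator, not the paper's segmentation scheme:

```python
def receptive_field(layers):
    """Cumulative receptive field r and stride (jump) j of stacked
    conv layers, each given as (kernel_size, stride).
    Recurrence: r += (k - 1) * j, then j *= s."""
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r, j

# Three 3x3 stride-1 convolutions compose to a 7x7 receptive field.
assert receptive_field([(3, 1), (3, 1), (3, 1)]) == (7, 1)
```

Knowing the receptive field of a block's outputs tells each edge device exactly which input slice (plus halo) it must receive, which is what makes fused-layer partitioning communication-efficient.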
arXiv Detail & Related papers (2022-07-22T18:38:11Z) - Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning.
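As a toy illustration of the log-barrier idea (not the paper's LBSGD implementation), the constrained problem min f(x) s.t. g(x) <= 0 is replaced by the surrogate f(x) - eta * log(-g(x)), and the step size is shrunk near the constraint boundary so iterates remain strictly feasible:

```python
import numpy as np

def barrier_step(x, grad_f, g, grad_g, eta=0.1):
    """One gradient step on f(x) - eta * log(-g(x)).
    The step size is capped by half the remaining slack so the
    iterate can never cross the boundary g(x) = 0."""
    slack = -g(x)                                  # distance to the boundary
    grad = grad_f(x) + eta * grad_g(x) / slack     # surrogate gradient
    step = min(0.1, 0.5 * slack / (abs(grad) + 1e-12))
    return x - step * grad

# Minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
f_grad = lambda x: 2 * (x - 2)
g = lambda x: x - 1
g_grad = lambda x: 1.0

x = 0.0
for _ in range(200):
    x = barrier_step(x, f_grad, g, g_grad)
assert g(x) < 0  # every iterate stays strictly feasible
```

The barrier term blows up as the iterate approaches the boundary, so the unconstrained minimum of the surrogate sits strictly inside the feasible set, near the constrained optimum x = 1.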
arXiv Detail & Related papers (2022-07-21T11:14:47Z) - Higher Performance Visual Tracking with Dual-Modal Localization
Visual Object Tracking (VOT) has synchronous needs for both robustness and accuracy.
We propose a dual-modal framework for target localization, consisting of robust localization suppressing distractors via ONR and accurate localization attending precisely to the target center via OFC.
arXiv Detail & Related papers (2021-03-18T08:47:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.