Private and Reliable Neural Network Inference
- URL: http://arxiv.org/abs/2210.15614v1
- Date: Thu, 27 Oct 2022 16:58:45 GMT
- Title: Private and Reliable Neural Network Inference
- Authors: Nikola Jovanović, Marc Fischer, Samuel Steffen, Martin Vechev
- Abstract summary: We present the first system which enables privacy-preserving inference on reliable NNs.
We employ these building blocks to enable privacy-preserving NN inference with robustness and fairness guarantees in a system called Phoenix.
- Score: 6.7386666699567845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable neural networks (NNs) provide important inference-time reliability
guarantees such as fairness and robustness. Complementarily, privacy-preserving
NN inference protects the privacy of client data. So far these two emerging
areas have been largely disconnected, yet their combination will be
increasingly important. In this work, we present the first system which enables
privacy-preserving inference on reliable NNs. Our key idea is to design
efficient fully homomorphic encryption (FHE) counterparts for the core
algorithmic building blocks of randomized smoothing, a state-of-the-art
technique for obtaining reliable models. The lack of required control flow in
FHE makes this a demanding task, as naïve solutions lead to unacceptable
runtime. We employ these building blocks to enable privacy-preserving NN
inference with robustness and fairness guarantees in a system called Phoenix.
Experimentally, we demonstrate that Phoenix achieves its goals without
incurring prohibitive latencies. To our knowledge, this is the first work which
bridges the areas of client data privacy and reliability guarantees for NNs.
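To make the core technique concrete, below is a minimal plaintext sketch of randomized smoothing prediction and certification in the style of Cohen et al. (2019); the classifier `f` and all parameters are illustrative assumptions, not Phoenix's actual components. Phoenix's contribution is emulating the vote counting, argmax, and statistical test of this loop under FHE.

```python
import numpy as np
from scipy.stats import binomtest, norm

def smoothed_predict_certify(f, x, sigma=0.5, n=1000, alpha=0.001, num_classes=10):
    """Plaintext randomized smoothing: majority vote over Gaussian
    perturbations of x, then certify an L2 robustness radius from a
    one-sided Clopper-Pearson lower bound on the top-class probability."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[f(x + sigma * np.random.randn(*x.shape))] += 1  # f: hypothetical base classifier
    top = int(np.argmax(counts))
    test = binomtest(int(counts[top]), n, 0.5, alternative="greater")
    p_lower = test.proportion_ci(confidence_level=1 - alpha, method="exact").low
    if p_lower <= 0.5:
        return None  # abstain: top-class vote not statistically significant
    return top, sigma * norm.ppf(p_lower)  # (prediction, certified L2 radius)
```

The argmax over vote counts and the abstention branch are exactly the data-dependent control flow that is expensive to express over encrypted data.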
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
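As a rough illustration of the general idea (a sketch of the standard Gaussian mechanism, not necessarily the paper's exact scheme), a device might clip and perturb its feature vector before upload:

```python
import numpy as np

def privatize_features(features, clip_norm=1.0, epsilon=0.5, delta=1e-5):
    """Clip a feature vector to bound its L2 sensitivity, then add Gaussian
    noise calibrated for (epsilon, delta)-DP (bound valid for epsilon < 1)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(features) + 1e-12))
    clipped = features * scale
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)
```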
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- OATH: Efficient and Flexible Zero-Knowledge Proofs of End-to-End ML Fairness [13.986886689256128]
Zero-Knowledge Proofs of Fairness address fairness noncompliance by allowing a service provider to verify that their model serves diverse demographics equitably.
We present OATH, a framework that is deployably efficient, with client-facing communication and an offline audit phase.
OATH achieves a 1343x runtime improvement over previous work on neural network ZKPoF and scales up to much larger models.
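For intuition about what such a proof attests, here is an illustrative (not OATH-specific) demographic-parity gap an auditor might check; a ZKPoF would certify the gap is below a threshold without revealing the model or the audit data.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate across groups;
    `predictions` is a 0/1 array, `groups` labels each example's demographic."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)
```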
arXiv Detail & Related papers (2024-09-17T16:00:35Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where a model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and the corresponding analyses are also presented.
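The summary does not spell out the algorithm; as a generic illustration of the model-sparsification ingredient (an assumption, not the authors' exact scheme), a client could upload only the top-k entries of its update:

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep the k largest-magnitude entries of a model update and zero the
    rest, reducing upload cost at some accuracy penalty."""
    flat = update.ravel().copy()
    keep = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(update.shape)
```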
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Unveiling the Role of Message Passing in Dual-Privacy Preservation on GNNs [7.626349365968476]
Graph Neural Networks (GNNs) are powerful tools for learning representations on graphs, such as social networks.
Privacy-preserving GNNs have been proposed, focusing on preserving node and/or link privacy.
We propose a principled privacy-preserving GNN framework that effectively safeguards both node and link privacy.
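The framework's mechanism is not detailed in this summary; purely as background, one standard building block for link privacy is randomized response on adjacency bits (a generic technique shown below, not necessarily the paper's approach).

```python
import numpy as np

def randomized_response_edges(adj, epsilon=1.0):
    """Flip each 0/1 adjacency entry with probability 1 / (1 + e^epsilon),
    yielding edge-level local differential privacy."""
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flip = np.random.rand(*adj.shape) > p_keep
    noisy = np.where(flip, 1 - adj, adj)
    np.fill_diagonal(noisy, 0)  # keep the graph free of self-loops
    return noisy
```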
arXiv Detail & Related papers (2023-08-25T17:46:43Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
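To unpack "statistical prediction of tolerance limits", here is a generic sampling bound in the same spirit (an illustration, not epsilon-ProVe itself): if all n uniform samples from a region satisfy the property, then with confidence 1 - alpha at most an eps fraction of the region is unsafe, provided n >= ln(alpha) / ln(1 - eps).

```python
import math

def certify_region(is_safe, sample_point, eps=0.01, alpha=1e-3):
    """Sample-based probabilistic safety check: returns True only if all
    n samples are safe, where n makes the (eps, alpha) guarantee hold."""
    n = math.ceil(math.log(alpha) / math.log(1.0 - eps))  # e.g. 688 for the defaults
    return all(is_safe(sample_point()) for _ in range(n))
```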
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy [14.678119872268198]
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates.
Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness.
We develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography.
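As a hint at the "crypto-aided" part (a generic additive secret-sharing sketch under an assumed fixed-point encoding, not PRECAD's actual protocol), a client can split its update between non-colluding servers so that no single server sees it in the clear:

```python
import numpy as np

def share_update(update, num_servers=2, modulus=2**32, scale=2**16):
    """Additively secret-share a fixed-point-encoded update: each share is
    individually uniform, and sum(shares) % modulus recovers the encoding."""
    encoded = np.round(update * scale).astype(np.int64) % modulus
    shares = [np.random.randint(0, modulus, size=encoded.shape, dtype=np.int64)
              for _ in range(num_servers - 1)]
    shares.append((encoded - sum(shares)) % modulus)
    return shares
```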
arXiv Detail & Related papers (2021-10-22T04:08:42Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
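A minimal sketch of the clipped-FedAvg aggregation step under client-level DP (parameter names are illustrative):

```python
import numpy as np

def dp_fedavg_round(client_updates, clip=1.0, noise_mult=1.0):
    """Clip each client update to L2 norm `clip`, average, then add Gaussian
    noise scaled to the per-client sensitivity clip / m of the average."""
    m = len(client_updates)
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = sum(clipped) / m
    return avg + np.random.normal(0.0, noise_mult * clip / m, size=avg.shape)
```

Clipping biases the average when client updates are heterogeneous, which is precisely the relationship the paper's convergence analysis quantifies.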
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Federated Neural Collaborative Filtering [0.0]
We present a federated version of the state-of-the-art Neural Collaborative Filtering (NCF) approach for item recommendations.
The system, named FedNCF, allows learning without requiring users to expose or transmit their raw data.
We discuss the peculiarities observed when applying FL to a collaborative filtering (CF) task, and we evaluate the privacy-preserving mechanism in terms of computational cost.
arXiv Detail & Related papers (2021-06-02T21:05:41Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
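For intuition, an SPN is evaluated bottom-up through alternating weighted-sum and product nodes; CryptoSPN performs this arithmetic obliviously. A plaintext toy evaluation (structure and numbers made up for illustration):

```python
import numpy as np

def spn_value(node, leaves):
    """Evaluate a tiny SPN given as nested tuples: ('leaf', i),
    ('prod', children), or ('sum', weights, children)."""
    if node[0] == "leaf":
        return leaves[node[1]]
    if node[0] == "prod":
        return np.prod([spn_value(c, leaves) for c in node[1]])
    weights, children = node[1], node[2]
    return sum(w * spn_value(c, leaves) for w, c in zip(weights, children))

spn = ("sum", [0.3, 0.7],
       [("prod", [("leaf", 0), ("leaf", 1)]),
        ("prod", [("leaf", 2), ("leaf", 3)])])
print(spn_value(spn, [0.9, 0.8, 0.2, 0.5]))  # 0.3*0.72 + 0.7*0.10 = 0.286
```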
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.