Towards Privacy-Preserving Neural Architecture Search
- URL: http://arxiv.org/abs/2204.10958v1
- Date: Fri, 22 Apr 2022 23:44:45 GMT
- Title: Towards Privacy-Preserving Neural Architecture Search
- Authors: Fuyi Wang and Leo Yu Zhang and Lei Pan and Shengshan Hu and Robin Doss
- Abstract summary: PP-NAS is a privacy-preserving neural architecture search framework based on secure multi-party computation.
PP-NAS outsources the NAS task to two non-colluding cloud servers to take full advantage of a mixed-protocol design.
We develop a new alternative for approximating the Softmax function over secret shares, which bypasses the limitation of approximating exponential operations in Softmax while improving accuracy.
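The difficulty the summary alludes to is that Softmax contains exponentials, which are expensive to evaluate inside secure-computation protocols. The paper's own share-based approximation is not reproduced here; as an illustrative sketch only, the snippet below contrasts standard Softmax with one well-known exponential-free stand-in (ReLU scores normalized by their sum), the kind of substitution MPC frameworks reach for.

```python
import numpy as np

def softmax(x):
    """Standard Softmax, shown for comparison (uses exp, which is
    costly to evaluate inside MPC protocols)."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def relu_normalize(x):
    """An illustrative exponential-free stand-in for Softmax:
    clip scores at zero with ReLU, then normalize by their sum.
    NOTE: this is NOT the PP-NAS construction, whose approximation
    is defined over secret shares; it only shows the general idea
    of avoiding exp."""
    r = np.maximum(x, 0.0)
    s = r.sum()
    if s == 0.0:  # all scores non-positive: fall back to uniform
        return np.full_like(x, 1.0 / x.size)
    return r / s

scores = np.array([2.0, 1.0, -1.0])
print(relu_normalize(scores))  # [0.6667, 0.3333, 0.0] -- sums to 1
```

Like Softmax, the output is a valid probability vector, but it involves only comparisons and one division, both of which have standard secure protocols.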
- Score: 7.895707607608013
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning promotes the continuous development of signal processing in
various fields, including network traffic monitoring, EEG classification, face
identification, and many more. However, massive user data collected for
training deep learning models raises privacy concerns and increases the
difficulty of manually adjusting the network structure. To address these
issues, we propose a privacy-preserving neural architecture search (PP-NAS)
framework based on secure multi-party computation to protect users' data and
the model's parameters/hyper-parameters. PP-NAS outsources the NAS task to two
non-colluding cloud servers to take full advantage of a mixed-protocol design.
Complementing existing PP machine learning frameworks, we redesign the secure
ReLU and Max-pooling garbled circuits for significantly better efficiency
($3 \sim 436$ times speed-up). We develop a new alternative for approximating
the Softmax function over secret shares, which bypasses the limitation of
approximating exponential operations in Softmax while improving accuracy.
Extensive analyses and experiments demonstrate PP-NAS's superiority in
security, efficiency, and accuracy.
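The two-server outsourcing the abstract describes typically rests on additive secret sharing: a value is split into two random-looking shares, one per non-colluding server, and linear operations are computed locally on shares. The toy below sketches this under assumed parameters (a 32-bit ring); it is a minimal illustration of the sharing primitive, not the PP-NAS protocol, and it omits the nonlinear layers (ReLU, Max-pooling) that the paper handles with garbled circuits.

```python
import secrets

MOD = 2 ** 32  # illustrative choice: additive sharing over a 32-bit ring

def share(x):
    """Split x into two additive shares modulo 2^32. Each share alone
    is uniformly random and reveals nothing about x."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    """Combine the shares held by servers S0 and S1."""
    return (s0 + s1) % MOD

def add_shared(a, b):
    """Secure addition: each server adds its local shares, with no
    communication. a and b are (share0, share1) pairs."""
    return ((a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD)

x, y = 1234, 5678
sx, sy = share(x), share(y)
sz = add_shared(sx, sy)
print(reconstruct(*sz))  # 6912 == 1234 + 5678
```

Security hinges on the non-collusion assumption the abstract states: either server's view is uniformly random, but the two servers together can reconstruct everything, which is why mixed-protocol designs pair this arithmetic sharing with garbled circuits for comparisons.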
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks [60.54852710216738]
We introduce a novel digital twin-assisted optimization framework, called D-REC, to ensure reliable caching in nextG wireless networks.
By incorporating reliability modules into a constrained decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints.
arXiv Detail & Related papers (2024-06-29T02:40:28Z)
- RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation Based Private Inference [17.299835585861747]
We introduce RRNet, a framework that aims to jointly reduce the overhead of MPC comparison protocols and accelerate computation through hardware acceleration.
Our approach integrates the hardware latency of cryptographic building blocks into the DNN loss function, resulting in improved energy efficiency, accuracy, and security guarantees.
arXiv Detail & Related papers (2023-02-05T04:02:13Z)
- Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge [21.72253397805102]
This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing.
We propose the first NAS tool that explicitly targets the optimization of the most peculiar architectural parameters of TCNs.
We test the proposed NAS on four real-world, edge-relevant tasks involving audio and bio-signals.
arXiv Detail & Related papers (2023-01-24T19:47:40Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) method is proposed to maximize the cache hit rates of devices in MEC networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable client behaviors and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning [5.092028049119383]
We analyse the three-dimensional privacy-accuracy-efficiency tradeoff in NNs for IoT devices.
We propose the Gecko training methodology, which explicitly adds resistance to private inference as a design objective.
arXiv Detail & Related papers (2020-10-02T10:36:55Z)
- ESMFL: Efficient and Secure Models for Federated Learning [28.953644581089495]
We propose a privacy-preserving method for the federated learning distributed system, operated on Intel Software Guard Extensions.
We reduce the communication cost by sparsification, and it can achieve reasonable accuracy with different model architectures.
arXiv Detail & Related papers (2020-09-03T18:27:32Z)
- A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to satisfy the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset.
arXiv Detail & Related papers (2020-03-13T23:52:03Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
- NASS: Optimizing Secure Inference via Neural Architecture Search [21.72469549507192]
We propose NASS, an integrated framework to search for tailored NN architectures designed specifically for secure inference (SI).
We show that we can achieve the best of both worlds by using NASS: prediction accuracy improves from 81.6% to 84.6%, while inference runtime is reduced by 2x and communication bandwidth by 1.9x on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-01-30T06:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.