Prive-HD: Privacy-Preserved Hyperdimensional Computing
- URL: http://arxiv.org/abs/2005.06716v1
- Date: Thu, 14 May 2020 04:19:34 GMT
- Title: Prive-HD: Privacy-Preserved Hyperdimensional Computing
- Authors: Behnam Khaleghi, Mohsen Imani, Tajana Rosing
- Abstract summary: Hyperdimensional (HD) computing is gaining traction due to its light-weight computation and robustness.
We present an accuracy-privacy trade-off method to realize a differentially private model and to obfuscate the information sent for cloud-hosted inference.
- Score: 18.512391787497673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy of data is a major challenge in machine learning as a trained
model may expose sensitive information of the enclosed dataset. Besides, the
limited computation capability and capacity of edge devices have made
cloud-hosted inference inevitable. Sending private information to remote
servers makes the privacy of inference also vulnerable because of susceptible
communication channels or even untrustworthy hosts. In this paper, we target
privacy-preserving training and inference of brain-inspired Hyperdimensional
(HD) computing, a new learning algorithm that is gaining traction due to its
light-weight computation and robustness, which are particularly appealing for edge devices
with tight constraints. Indeed, despite its promising attributes, HD computing
has virtually no privacy due to its reversible computation. We present an
accuracy-privacy trade-off method through meticulous quantization and pruning
of hypervectors, the building blocks of HD, to realize a differentially private
model as well as to obfuscate the information sent for cloud-hosted inference.
Finally, we show how the proposed techniques can also be leveraged for
efficient hardware implementation.
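The abstract describes the mechanism only at a high level. The snippet below is a minimal sketch, assuming a random-projection HD encoder and illustrative parameters (dimensionality D, 1-bit quantization, a 50% keep ratio), of how an edge device might quantize and prune a hypervector before sending it for cloud-hosted inference; it is not the paper's exact scheme.

```python
import numpy as np

# Minimal sketch of HD encoding followed by quantization and pruning of the
# resulting hypervector, in the spirit of the accuracy-privacy trade-off the
# abstract describes. The encoder, D, the bit width, and the keep ratio are
# illustrative assumptions, not the paper's exact configuration.

D = 10_000                               # hypervector dimensionality (typical HD scale)
rng = np.random.default_rng(0)

def encode(x, proj):
    """Random-projection HD encoding of a raw feature vector x into a hypervector."""
    return proj @ x                      # shape (D,)

def prune(h, keep_ratio=0.5):
    """Zero out the lowest-magnitude dimensions, obfuscating the encoding."""
    k = int(len(h) * keep_ratio)         # number of dimensions to keep
    drop = np.argsort(np.abs(h))[:-k]    # indices of the smallest |h_i|
    out = h.astype(np.float32)
    out[drop] = 0.0
    return out

def quantize(h, bits=1):
    """Coarsely quantize a hypervector; 1 bit keeps only the sign (zeros stay zero)."""
    if bits == 1:
        return np.sign(h)
    levels = 2 ** bits
    edges = np.quantile(h, np.linspace(0, 1, levels + 1)[1:-1])
    return np.digitize(h, edges).astype(np.int8)

# What an edge device might transmit for cloud-hosted inference:
x = rng.standard_normal(64)              # raw (private) feature vector
proj = rng.standard_normal((D, 64))      # shared projection (base) hypervectors
h_sent = quantize(prune(encode(x, proj), keep_ratio=0.5), bits=1)
```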
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
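The collaborative inference summary above does not spell out the perturbation mechanism. A common way to realize feature-level differential privacy on-device is to clip the extracted feature vector and add calibrated Gaussian noise before transmission; the sketch below assumes that pattern, with `clip_norm` and `sigma` as illustrative placeholders rather than the paper's actual parameters.

```python
import numpy as np

def privatize_features(features, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip the extracted feature vector to an L2 norm of `clip_norm` and add
    Gaussian noise before it leaves the device (Gaussian-mechanism style).
    `clip_norm` and `sigma` are placeholders; in practice they would follow
    from the target (epsilon, delta) privacy budget."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(features) + 1e-12))
    clipped = features * scale
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return clipped + noise

# Edge device: perturb locally, then send to the central server for inference.
z = privatize_features(np.random.default_rng(1).standard_normal(128))
```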
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
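The masked-DP entry above describes applying differential privacy selectively to the sensitive regions of a sample. The following is a minimal sketch of that idea, with an assumed Gaussian mechanism and a hypothetical sensitive-region mask; neither is taken from the paper.

```python
import numpy as np

def masked_dp_noise(sample, sensitive_mask, sigma=1.0, rng=None):
    """Add Gaussian noise only where `sensitive_mask` is True, leaving the
    non-sensitive regions of the sample untouched. `sigma` and the mask are
    illustrative; the paper's calibration is not given in this summary."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=sample.shape) * sensitive_mask
    return sample + noise

# Toy example: a 2D "frame" where only a small region is marked sensitive.
frame = np.random.default_rng(0).standard_normal((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                    # hypothetical sensitive region
protected = masked_dp_noise(frame, mask)
```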
- Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient BRFL framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
arXiv Detail & Related papers (2024-08-12T14:48:25Z)
- VeriSplit: Secure and Practical Offloading of Machine Learning Inferences across IoT Devices [31.247069150077632]
Many Internet-of-Things (IoT) devices rely on cloud computation resources to perform machine learning inferences.
This is expensive and may raise privacy concerns for users.
We propose VeriSplit, a framework for offloading machine learning inferences to locally-available devices.
arXiv Detail & Related papers (2024-06-02T01:28:38Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey paper presents different differentially private machine learning algorithms, grouped into two main categories.
arXiv Detail & Related papers (2022-11-19T14:20:53Z)
- DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware [3.1853566662905943]
DarKnight is a framework for large DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
DarKnight's data obfuscation strategy provides provable data privacy and computation integrity in the cloud servers.
arXiv Detail & Related papers (2022-06-30T19:58:36Z)
- SoK: Privacy-preserving Deep Learning with Homomorphic Encryption [2.9069679115858755]
With homomorphic encryption (HE), computation can be performed on encrypted data without revealing its content.
We take an in-depth look at approaches that combine neural networks with HE for privacy preservation.
We find numerous challenges to HE based privacy-preserving deep learning such as computational overhead, usability, and limitations posed by the encryption schemes.
arXiv Detail & Related papers (2021-12-23T22:03:27Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
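The entry above names the two core ingredients of differentially private training, per-example gradient clipping and noise addition. The DP-SGD-style update below is a minimal sketch of that step; the function and constants are illustrative and not taken from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to L2 norm
    `clip`, average the clipped gradients, and add Gaussian noise scaled by
    `noise_mult * clip / batch_size` before taking the step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(clipped), size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage with random vectors standing in for real per-example gradients.
rng = np.random.default_rng(0)
params = np.zeros(10)
grads = [rng.standard_normal(10) for _ in range(32)]
params = dp_sgd_step(params, grads)
```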
This list is automatically generated from the titles and abstracts of the papers in this site.