Prive-HD: Privacy-Preserved Hyperdimensional Computing
- URL: http://arxiv.org/abs/2005.06716v1
- Date: Thu, 14 May 2020 04:19:34 GMT
- Title: Prive-HD: Privacy-Preserved Hyperdimensional Computing
- Authors: Behnam Khaleghi, Mohsen Imani, Tajana Rosing
- Abstract summary: Hyperdimensional (HD) computing is gaining traction due to its lightweight computation and robustness.
We present an accuracy-privacy trade-off method to realize a differentially private model and to obfuscate the information sent for cloud-hosted inference.
- Score: 18.512391787497673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy of data is a major challenge in machine learning as a trained
model may expose sensitive information of the enclosed dataset. Besides, the
limited computation capability and capacity of edge devices have made
cloud-hosted inference inevitable. Sending private information to remote
servers makes the privacy of inference also vulnerable because of susceptible
communication channels or even untrustworthy hosts. In this paper, we target
privacy-preserving training and inference of brain-inspired Hyperdimensional
(HD) computing, a new learning algorithm that is gaining traction due to its
lightweight computation and robustness, which are particularly appealing for
edge devices with tight constraints. Indeed, despite its promising attributes, HD computing
has virtually no privacy due to its reversible computation. We present an
accuracy-privacy trade-off method through meticulous quantization and pruning
of hypervectors, the building blocks of HD, to realize a differentially private
model as well as to obfuscate the information sent for cloud-hosted inference.
Finally, we show how the proposed techniques can also be leveraged for
efficient hardware implementation.
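The quantization-and-pruning idea from the abstract can be sketched as follows. This is a toy illustration, not the paper's exact method: the hypervector, the number of quantization levels, and the pruning fraction are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                          # hypervector dimensionality (typical for HD)

# Hypothetical class hypervector from a trained HD model.
class_hv = rng.normal(size=D)

# 1. Quantization: map each dimension to a small number of integer levels,
#    reducing the information available to an attacker inverting the model.
levels = 3
scale = np.max(np.abs(class_hv)) / levels
quantized = np.clip(np.round(class_hv / scale), -levels, levels).astype(int)

# 2. Pruning: zero out the lowest-magnitude dimensions, further obfuscating
#    the hypervector while keeping the dominant components used for
#    similarity-based classification.
prune_frac = 0.5
thresh = np.quantile(np.abs(quantized), prune_frac)
pruned = np.where(np.abs(quantized) >= thresh, quantized, 0)

# Every surviving entry is one of a handful of coarse levels.
assert set(np.unique(pruned)) <= set(range(-levels, levels + 1))
assert np.count_nonzero(pruned) < D
```

Both knobs trade accuracy for privacy: coarser levels and a larger pruning fraction destroy more of the reversible structure at some cost in classification accuracy.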
Related papers
- VeriSplit: Secure and Practical Offloading of Machine Learning Inferences across IoT Devices [31.247069150077632]
Many Internet-of-Things (IoT) devices rely on cloud computation resources to perform machine learning inferences.
This is expensive and may raise privacy concerns for users.
We propose VeriSplit, a framework for offloading machine learning inferences to locally-available devices.
arXiv Detail & Related papers (2024-06-02T01:28:38Z) - Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Your Room is not Private: Gradient Inversion Attack on Reinforcement Learning [47.96266341738642]
Privacy emerges as a pivotal concern within the realm of embodied AI, as the robot accesses substantial personal information.
This paper proposes an attack on the value-based algorithm and the gradient-based algorithm, utilizing gradient inversion to reconstruct states, actions, and supervision signals.
arXiv Detail & Related papers (2023-06-15T16:53:26Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey paper presents different differentially private machine learning algorithms categorized into two main categories.
arXiv Detail & Related papers (2022-11-19T14:20:53Z) - DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware [3.1853566662905943]
DarKnight is a framework for large DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
DarKnight's data obfuscation strategy provides provable data privacy and computation integrity in the cloud servers.
arXiv Detail & Related papers (2022-06-30T19:58:36Z) - SoK: Privacy-preserving Deep Learning with Homomorphic Encryption [2.9069679115858755]
With homomorphic encryption (HE), computation can be performed on encrypted data without revealing its content.
We take an in-depth look at approaches that combine neural networks with HE for privacy preservation.
We find numerous challenges to HE based privacy-preserving deep learning such as computational overhead, usability, and limitations posed by the encryption schemes.
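To make the idea of computing on ciphertexts concrete, here is a minimal textbook Paillier cryptosystem, the classic additively homomorphic scheme. The tiny primes make it completely insecure; this is an illustration of the homomorphic property only, not any scheme surveyed in the paper.

```python
import math
import random

p, q = 61, 53                       # toy demo primes (insecure; illustration only)
n = p * q
n2 = n * n
g = n + 1                           # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n) # modular inverse via pow(x, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
assert decrypt((c1 * c2) % n2) == 42
```

The computational-overhead challenge the survey highlights is already visible here: a single encrypted addition costs several large modular exponentiations.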
arXiv Detail & Related papers (2021-12-23T22:03:27Z) - Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z) - Privacy and Integrity Preserving Training Using Trusted Hardware [4.5843599120944605]
DarKnight is a framework for large-computation training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
arXiv Detail & Related papers (2021-05-01T19:33:28Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows agents' private data to be inferred.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the learning task.
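One way to picture the nullspace idea is a hedged sketch like the following: per-agent perturbations are drawn so they lie in the nullspace of the averaging operator (i.e., they sum to zero across agents), masking individual estimates without shifting the network average. The agent count, estimates, and zero-sum projection are illustrative assumptions, not the paper's exact graph-dependent construction.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                               # number of agents (hypothetical setup)
est = rng.normal(size=(K, 3))       # each agent's local estimate

# Draw noise, then project it onto the zero-sum subspace: the perturbations
# now cancel under averaging, so aggregation is unaffected.
noise = rng.normal(size=(K, 3))
noise -= noise.mean(axis=0)

masked = est + noise

# Individual estimates change, but the network average is preserved.
assert not np.allclose(masked, est)
assert np.allclose(masked.mean(axis=0), est.mean(axis=0))
```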
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.