RQP-SGD: Differential Private Machine Learning through Noisy SGD and
Randomized Quantization
- URL: http://arxiv.org/abs/2402.06606v1
- Date: Fri, 9 Feb 2024 18:34:08 GMT
- Title: RQP-SGD: Differential Private Machine Learning through Noisy SGD and
Randomized Quantization
- Authors: Ce Feng, Parv Venkitasubramaniam
- Abstract summary: We present RQP-SGD, a new approach for privacy-preserving quantization to train machine learning models.
This approach combines differentially private stochastic gradient descent (DP-SGD) with randomized quantization, providing a measurable privacy guarantee.
- Score: 8.04975023021212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of IoT devices has prompted the demand for deploying machine
learning at-the-edge with real-time, efficient, and secure data processing. In
this context, implementing machine learning (ML) models with real-valued weight
parameters can prove to be impractical particularly for large models, and there
is a need to train models with quantized discrete weights. At the same time,
these low-dimensional models also need to preserve privacy of the underlying
dataset. In this work, we present RQP-SGD, a new approach for
privacy-preserving quantization to train machine learning models for low-memory
ML-at-the-edge. This approach combines differentially private stochastic
gradient descent (DP-SGD) with randomized quantization, providing a measurable
privacy guarantee in machine learning. In particular, we study the utility
convergence of implementing RQP-SGD on ML tasks with convex objectives and
quantization constraints and demonstrate its efficacy over deterministic
quantization. Through experiments conducted on two datasets, we show the
practical effectiveness of RQP-SGD.
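The recipe the abstract describes can be sketched as a noisy projected-SGD loop: clip each per-example gradient, average, add Gaussian noise, then project the update onto a discrete weight grid with an unbiased randomized quantizer. A minimal NumPy sketch, assuming a generic differentiable loss; all function names and hyperparameters are illustrative, not taken from the paper:

```python
import numpy as np

def clip_grad(g, c):
    """Clip a per-example gradient to L2 norm at most c (standard DP-SGD step)."""
    n = np.linalg.norm(g)
    return g * min(1.0, c / n) if n > 0 else g

def randomized_quantize(w, step=0.1):
    """Stochastically round each weight to a grid of spacing `step`.
    Rounding up with probability equal to the fractional part keeps
    the quantizer unbiased: E[q(w)] = w."""
    scaled = w / step
    lo = np.floor(scaled)
    round_up = np.random.rand(*w.shape) < (scaled - lo)
    return (lo + round_up) * step

def rqp_sgd_step(w, grad_fn, batch, lr=0.05, clip=1.0, sigma=1.0, step=0.1):
    """One noisy-SGD update followed by randomized quantization (sketch)."""
    g = np.mean([clip_grad(grad_fn(w, x, y), clip) for x, y in batch], axis=0)
    noise = np.random.normal(0.0, sigma * clip / len(batch), size=w.shape)
    return randomized_quantize(w - lr * (g + noise), step=step)
```

Iterating this step over epochs yields quantized weights throughout training; the privacy guarantee the paper proves comes from accounting for the Gaussian noise (and the quantizer's randomness) across all iterations.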
Related papers
- A Quantization-based Technique for Privacy Preserving Distributed Learning [2.2139875218234475]
We describe a novel, regulation-compliant data protection technique for the distributed training of Machine Learning models.
Our method protects both training data and ML model parameters by employing a protocol based on a quantized multi-hash data representation Hash-Comb combined with randomization.
arXiv Detail & Related papers (2024-06-26T14:54:12Z)
- QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution [1.565361244756411]
Federated learning (FL) is a framework which allows multiple users to jointly train a global machine learning (ML) model.
One key motivation of such distributed frameworks is to provide privacy guarantees to the users.
We present a novel quantization method, utilizing a mixed geometric distribution to introduce the randomness needed to provide DP.
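As a rough illustration of the idea — quantization whose own randomness supplies the noise needed for DP — here is a sketch that perturbs grid indices with two-sided geometric noise truncated to a bounded support. The sampler and parameters are illustrative, not the paper's exact mixed-distribution construction:

```python
import numpy as np

def truncated_geometric_noise(p, trunc, size):
    """Sample two-sided discrete noise on the integers, with geometrically
    decaying probability in |k|, truncated to [-trunc, trunc]."""
    support = np.arange(-trunc, trunc + 1)
    weights = p ** np.abs(support)
    weights = weights / weights.sum()
    return np.random.choice(support, size=size, p=weights)

def dp_quantize(x, step, p=0.5, trunc=3):
    """Quantize to an integer grid, then randomize the grid index so the
    released value is noisy; the DP guarantee comes from this noise."""
    idx = np.round(x / step).astype(int)
    return (idx + truncated_geometric_noise(p, trunc, idx.shape)) * step
```

Because the noise is discrete and bounded, the released values stay on the quantization grid, which is what makes this style of mechanism attractive for communication-constrained federated learning.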
arXiv Detail & Related papers (2023-12-10T04:44:53Z)
- Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
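One way to picture such a representation is bit-plane ("vertical layer") decomposition of integer-quantized weights: keep every plane for full precision, or reassemble only the top planes to serve a lower-precision model on demand from the same stored object. A simplified sketch of that general idea, not the paper's actual scheme:

```python
import numpy as np

def to_bitplanes(q, bits=8):
    """Decompose unsigned integer weights into per-bit vertical layers,
    most-significant plane first."""
    return np.stack([(q >> b) & 1 for b in range(bits - 1, -1, -1)])

def from_bitplanes(planes, keep):
    """Reassemble a model from only the top `keep` planes; the discarded
    low-order bits are simply zeroed, giving a coarser quantization."""
    bits = planes.shape[0]
    out = np.zeros(planes.shape[1:], dtype=np.int64)
    for i in range(keep):
        out |= planes[i].astype(np.int64) << (bits - 1 - i)
    return out
```

Serving at 4-bit precision then amounts to reading four planes instead of eight, without training or storing a separate 4-bit model.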
arXiv Detail & Related papers (2022-12-10T15:57:38Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing pressure by controlling injection/extraction is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z)
- Structural risk minimization for quantum linear classifiers [0.0]
Quantum machine learning (QML) stands out as one of the typically highlighted candidates for quantum computing's near-term "killer application".
We investigate capacity measures of two closely related QML models called explicit and implicit quantum linear classifiers.
We identify that the rank and Frobenius norm of the observables used in the QML model closely control the model's capacity.
arXiv Detail & Related papers (2021-05-12T10:39:55Z)
- Quantum machine learning with differential privacy [3.2442879131520126]
We develop a hybrid quantum-classical model that is trained to preserve privacy using a differentially private optimization algorithm.
Experiments demonstrate that differentially private QML can protect user-sensitive information without diminishing model accuracy.
arXiv Detail & Related papers (2021-03-10T18:06:15Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
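The zeroth-order optimization mentioned above can be illustrated with the standard two-point gradient estimator, which needs only input-output queries to the black-box loss — no backpropagation through the model. Hyperparameters and names here are illustrative, not BAR's exact settings:

```python
import numpy as np

def zeroth_order_grad(f, theta, mu=1e-3, n_dirs=20):
    """Estimate the gradient of a black-box scalar loss f at theta by probing
    it along random Gaussian directions and differencing the outputs."""
    d = theta.size
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = np.random.randn(d)
        g += (f(theta + mu * u) - f(theta - mu * u)) / (2 * mu) * u
    return g / n_dirs
```

Plugging this estimate into an ordinary SGD loop lets one tune reprogramming parameters for a model exposed only through its predictions, at the cost of 2 × n_dirs queries per step.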
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- Macroscopic Traffic Flow Modeling with Physics Regularized Gaussian Process: A New Insight into Machine Learning Applications [14.164058812512371]
This study presents a new modeling framework, named physics regularized machine learning (PRML), to encode classical traffic flow models into the machine learning architecture.
To prove the effectiveness of the proposed model, this paper conducts empirical studies on a real-world dataset which is collected from a stretch of I-15 freeway, Utah.
arXiv Detail & Related papers (2020-02-06T17:22:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.