RQP-SGD: Differential Private Machine Learning through Noisy SGD and
Randomized Quantization
- URL: http://arxiv.org/abs/2402.06606v1
- Date: Fri, 9 Feb 2024 18:34:08 GMT
- Title: RQP-SGD: Differential Private Machine Learning through Noisy SGD and
Randomized Quantization
- Authors: Ce Feng, Parv Venkitasubramaniam
- Abstract summary: We present RQP-SGD, a new approach for privacy-preserving quantization to train machine learning models.
This approach combines differentially private stochastic gradient descent (DP-SGD) with randomized quantization, providing a measurable privacy guarantee.
- Score: 8.04975023021212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of IoT devices has prompted the demand for deploying machine
learning at-the-edge with real-time, efficient, and secure data processing. In
this context, implementing machine learning (ML) models with real-valued weight
parameters can prove to be impractical particularly for large models, and there
is a need to train models with quantized discrete weights. At the same time,
these low-dimensional models also need to preserve privacy of the underlying
dataset. In this work, we present RQP-SGD, a new approach for
privacy-preserving quantization to train machine learning models for low-memory
ML-at-the-edge. This approach combines differentially private stochastic
gradient descent (DP-SGD) with randomized quantization, providing a measurable
privacy guarantee in machine learning. In particular, we study the utility
convergence of implementing RQP-SGD on ML tasks with convex objectives and
quantization constraints and demonstrate its efficacy over deterministic
quantization. Through experiments conducted on two datasets, we show the
practical effectiveness of RQP-SGD.
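The abstract describes two mechanisms composed per training step: a DP-SGD update (per-example gradient clipping plus calibrated Gaussian noise) followed by a projection onto a discrete weight set via randomized quantization. The sketch below illustrates those assumed mechanics; the function names, hyperparameters, and quantization grid are illustrative, not taken from the authors' implementation.

```python
# Hedged sketch of an RQP-SGD-style step (assumed mechanics, not the
# paper's reference code): clip per-example gradients, add Gaussian
# noise for differential privacy, then project the updated weights
# with randomized (stochastic) quantization, which rounds each weight
# up or down with probability proportional to its distance from the
# two neighboring quantization levels.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(w, levels):
    """Randomly round each weight to one of its two neighboring levels."""
    idx = np.clip(np.searchsorted(levels, w), 1, len(levels) - 1)
    lo, hi = levels[idx - 1], levels[idx]
    p_hi = (w - lo) / (hi - lo)  # unbiased in expectation for in-range w
    return np.where(rng.random(w.shape) < p_hi, hi, lo)

def rqp_sgd_step(w, grads, levels, lr=0.1, clip=1.0, sigma=1.0):
    """One noisy-SGD update followed by projection onto discrete levels."""
    # per-example gradient clipping to norm <= clip
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # average the clipped gradients and add calibrated Gaussian noise
    g = clipped.mean(axis=0)
    g += rng.normal(0.0, sigma * clip / len(grads), size=g.shape)
    # gradient step, then randomized projection onto the weight grid
    return stochastic_quantize(w - lr * g, levels)

levels = np.linspace(-1.0, 1.0, 5)   # 5 discrete weight levels
w = np.zeros(3)
grads = rng.normal(size=(8, 3))      # stand-in per-example gradients
w = rqp_sgd_step(w, grads, levels)   # w now lies exactly on the grid
```

Because the quantizer is randomized rather than deterministic, the projected weight equals the pre-projection weight in expectation, which is the property the paper's utility-convergence analysis exploits.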
Related papers
- Learning Density Functionals from Noisy Quantum Data [0.0]
Noisy intermediate-scale quantum (NISQ) devices are used to generate training data for machine learning (ML) models.
We show that a neural-network ML model can successfully generalize from small datasets subject to noise typical of NISQ algorithms.
Our findings suggest a promising pathway for leveraging NISQ devices in practical quantum simulations.
arXiv Detail & Related papers (2024-09-04T17:59:55Z)
- Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
arXiv Detail & Related papers (2024-08-01T20:21:52Z)
- Higher order quantum reservoir computing for non-intrusive reduced-order models [0.0]
Quantum reservoir computing (QRC) is a hybrid quantum-classical framework employing an ensemble of interconnected small quantum systems.
We show that QRC is able to predict complex nonlinear dynamical systems in a stable and accurate manner.
arXiv Detail & Related papers (2024-07-31T13:37:04Z)
- A Quantization-based Technique for Privacy Preserving Distributed Learning [2.2139875218234475]
We describe a novel, regulation-compliant data protection technique for the distributed training of Machine Learning models.
Our method protects both training data and ML model parameters by employing a protocol based on a quantized multi-hash data representation Hash-Comb combined with randomization.
arXiv Detail & Related papers (2024-06-26T14:54:12Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing these pressures by controlling injection/extraction rates is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z)
- Quantum machine learning with differential privacy [3.2442879131520126]
We develop a hybrid quantum-classical model that is trained to preserve privacy using a differentially private optimization algorithm.
Experiments demonstrate that differentially private QML can protect user-sensitive information without diminishing model accuracy.
arXiv Detail & Related papers (2021-03-10T18:06:15Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
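The BAR summary above hinges on zeroth-order optimization: estimating gradients of a model that can only be queried through its input-output responses. The following is a generic sketch of that estimator; the function names and the toy quadratic loss are illustrative and not taken from the BAR paper.

```python
# Hedged sketch of the two-point random-direction zeroth-order
# gradient estimator commonly used by black-box methods like BAR
# (illustrative only): the gradient of a query-only loss is
# approximated from finite differences along random directions.
import numpy as np

rng = np.random.default_rng(1)

def zo_gradient(loss_fn, theta, mu=1e-3, num_dirs=2000):
    """Two-point random-direction estimate of grad loss_fn at theta."""
    g = np.zeros_like(theta)
    for _ in range(num_dirs):
        u = rng.normal(size=theta.shape)  # random probe direction
        # finite difference along u; only loss *queries* are needed
        g += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return g / num_dirs

# toy check on a quadratic loss, whose true gradient is 2 * theta
theta = np.array([1.0, -2.0])
g = zo_gradient(lambda t: np.sum(t ** 2), theta)
```

Averaging over many random directions keeps the estimate close to the true gradient at the cost of extra queries, which is the trade-off such black-box reprogramming methods accept.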
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.