Explainable Differential Privacy-Hyperdimensional Computing for Balancing Privacy and Transparency in Additive Manufacturing Monitoring
- URL: http://arxiv.org/abs/2407.07066v3
- Date: Thu, 14 Nov 2024 20:13:19 GMT
- Title: Explainable Differential Privacy-Hyperdimensional Computing for Balancing Privacy and Transparency in Additive Manufacturing Monitoring
- Authors: Fardin Jalil Piran, Prathyush P. Poduval, Hamza Errahmouni Barkam, Mohsen Imani, Farhad Imani
- Abstract summary: Differential Privacy (DP) adds mathematically controlled noise to Machine Learning (ML) models.
This study presents the Differential Privacy-Hyperdimensional Computing (DP-HD) framework to quantify noise effects on accuracy.
Experimental results show DP-HD achieves superior operational efficiency, prediction accuracy, and privacy protection.
- Score: 5.282482641822561
- License:
- Abstract: Machine Learning (ML) models combined with in-situ sensing offer a powerful solution to address defect detection challenges in Additive Manufacturing (AM), yet this integration raises critical data privacy concerns, such as data leakage and sensor data compromise, potentially exposing sensitive information about part design and material composition. Differential Privacy (DP), which adds mathematically controlled noise to ML models, provides a way to balance data utility with privacy by concealing identifiable traces from sensor data. However, introducing noise into ML models, especially black-box Artificial Intelligence (AI) models, complicates the prediction of how noise impacts model accuracy. This study presents the Differential Privacy-Hyperdimensional Computing (DP-HD) framework, which leverages Explainable AI (XAI) and the vector symbolic paradigm to quantify noise effects on accuracy. By defining a Signal-to-Noise Ratio (SNR) metric, DP-HD assesses the contribution of training data relative to DP noise, allowing selection of an optimal balance between accuracy and privacy. Experimental results using high-speed melt pool data for anomaly detection in AM demonstrate that DP-HD achieves superior operational efficiency, prediction accuracy, and privacy protection. For instance, with a privacy budget set at 1, DP-HD achieves 94.43% accuracy, outperforming state-of-the-art ML models. Furthermore, DP-HD maintains high accuracy under substantial noise additions to enhance privacy, unlike current models that experience significant accuracy declines under stringent privacy constraints.
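To make the framework's core quantities concrete, here is a minimal sketch of how DP noise and an SNR-style accuracy proxy might interact in a hyperdimensional classifier. Everything below (the random-projection encoder, the function names, and the exact SNR definition) is an illustrative assumption, not the authors' implementation:
```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice)

def encode(x, projection):
    # Random-projection encoding: map a feature vector to a bipolar hypervector.
    return np.sign(projection @ x)

def train_class_hvs(X, y, projection, n_classes):
    # Bundle (sum) the encoded training samples of each class.
    hvs = np.zeros((n_classes, D))
    for xi, yi in zip(X, y):
        hvs[yi] += encode(xi, projection)
    return hvs

def add_dp_noise(class_hvs, sigma):
    # Gaussian-mechanism noise added once to the aggregated class hypervectors.
    return class_hvs + rng.normal(0.0, sigma, size=class_hvs.shape)

def snr(class_hvs, sigma):
    # Toy SNR: average signal power of the learned hypervectors relative to
    # the injected DP noise power (illustrative definition, not the paper's).
    return np.mean(class_hvs ** 2) / sigma ** 2

def predict(x, noisy_hvs, projection):
    # Classify by cosine similarity to the (noisy) class hypervectors.
    q = encode(x, projection)
    sims = noisy_hvs @ q / (np.linalg.norm(noisy_hvs, axis=1) * np.linalg.norm(q))
    return int(np.argmax(sims))
```
Under this toy definition, a target SNR fixes the largest noise level sigma (and hence the smallest privacy budget) that keeps the class hypervectors distinguishable, mirroring the accuracy-privacy selection the abstract describes.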
Related papers
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero).
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
arXiv Detail & Related papers (2024-09-05T08:40:54Z)
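As a rough illustration of the heterogeneous-noise idea in DP-Hero's DP-SGD variant, the following sketch injects Gaussian noise whose per-coordinate scale is modulated by a `noise_scale` vector standing in for guidance from previously released model parameters; the function and its parameters are hypothetical, not the paper's construction:
```python
import numpy as np

rng = np.random.default_rng(0)

def hetero_dp_sgd_step(params, per_example_grads, clip_norm, base_sigma,
                       noise_scale, lr):
    # Clip each per-example gradient to bound its sensitivity.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    g_sum = np.sum(clipped, axis=0)

    # Heterogeneous Gaussian noise: coordinate-wise standard deviations,
    # redistributing the noise budget across coordinates (illustrative
    # allocation, not DP-Hero's exact rule).
    sigma = base_sigma * clip_norm * noise_scale
    noisy = g_sum + rng.normal(0.0, 1.0, size=g_sum.shape) * sigma

    return params - lr * noisy / len(per_example_grads)
```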
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
DP offers a framework that provides a privacy guarantee by adding calibrated amounts of noise to model parameters.
We propose adaptive noise addition in FL that sets the amount of injected noise according to each feature's relative importance.
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
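A minimal sketch of the priority-based idea described above: split a total privacy budget across coordinates in proportion to their importance, so more important parameters receive more budget and hence less noise. The budget-splitting rule and names here are illustrative assumptions, not the paper's exact mechanism:
```python
import numpy as np

rng = np.random.default_rng(0)

def importance_weighted_noise(update, importance, eps_total, sensitivity):
    # Per-coordinate budget shares proportional to importance
    # (e.g. normalized gradient magnitudes).
    w = importance / importance.sum()
    eps = eps_total * w
    scale = sensitivity / eps          # Laplace scale b = sensitivity / epsilon
    return update + rng.laplace(0.0, scale)
```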
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the series to prevent FL from premature convergence resulting from excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated by comparison with the state-of-the-art Gaussian noise mechanism, which uses a persistent noise amplitude.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
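A small sketch of a time-varying noise amplitude, assuming a simple geometric decay schedule (the paper derives its amplitude series and an online refinement; the schedule below is only an illustrative stand-in):
```python
import numpy as np

rng = np.random.default_rng(0)

def noise_amplitude(round_t, sigma0, decay):
    # Geometrically decaying amplitude: strong early perturbation,
    # weaker noise as training approaches convergence.
    return sigma0 * decay ** round_t

def perturb_update(update, round_t, sigma0=1.0, decay=0.95):
    sigma = noise_amplitude(round_t, sigma0, decay)
    return update + rng.normal(0.0, sigma, size=update.shape)
```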
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods typically require access to the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection at the cost of training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
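One way to picture correlated additive perturbations is noise that masks each user's individual update but cancels in the over-the-air sum at the server; the zero-sum projection below is a deliberately simplified sketch, not the paper's construction:
```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_perturbations(n_users, dim, sigma):
    # Draw per-user noise vectors, then project onto the zero-sum subspace
    # so the perturbations cancel exactly in the aggregate.
    noise = rng.normal(0.0, sigma, size=(n_users, dim))
    return noise - noise.mean(axis=0)

# The server receives sum_i (update_i + noise_i) = sum_i update_i exactly,
# while each transmitted update_i + noise_i is individually masked.
```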
- DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning [3.822543555265593]
Differential Privacy (DP) has emerged as a rigorous formalism to reason about privacy leakage.
In machine learning (ML), DP has been employed to limit the disclosure of training examples.
For deep neural networks, gradient perturbation results in the lowest privacy leakage.
arXiv Detail & Related papers (2021-12-24T08:40:28Z)
- Accuracy, Interpretability, and Differential Privacy via Explainable Boosting [22.30100748652558]
We show that adding differential privacy to Explainable Boosting Machines (EBMs) yields state-of-the-art accuracy while protecting privacy.
Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees.
arXiv Detail & Related papers (2021-06-17T17:33:00Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, the model may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
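For the last entry, Laplacian smoothing of a noisy gradient is commonly realized by solving (I + sigma*L) g_s = g for the 1-D periodic discrete Laplacian L, which reduces to a single FFT because the operator is circulant; the sketch below follows that DP-LSSGD-style recipe but is an illustration under stated assumptions, not the paper's exact procedure:
```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    # Eigenvalues of I + sigma * L in the discrete Fourier basis, where L is
    # the 1-D discrete Laplacian with periodic boundary conditions.
    d = grad.shape[0]
    k = np.arange(d)
    eig = 1.0 + 2.0 * sigma * (1.0 - np.cos(2.0 * np.pi * k / d))
    # Solve the linear system with one forward and one inverse FFT.
    return np.real(np.fft.ifft(np.fft.fft(grad) / eig))
```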