Towards Industrial Private AI: A two-tier framework for data and model security
- URL: http://arxiv.org/abs/2107.12806v1
- Date: Tue, 27 Jul 2021 13:28:07 GMT
- Title: Towards Industrial Private AI: A two-tier framework for data and model security
- Authors: Sunder Ali Khowaja, Kapal Dev, Nawab Muhammad Faseeh Qureshi, Parus Khuwaja, Luca Foschini
- Abstract summary: We propose a federated learning and encryption-based private (FLEP) AI framework that provides two-tier security for data and model parameters in an IIoT environment.
Experimental results show that the proposed method achieves better encryption quality at the expense of slightly increased execution time.
- Score: 7.773212143837498
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the advances in 5G and IoT devices, industries are rapidly adopting
artificial intelligence (AI) techniques to improve classification- and
prediction-based services. However, the use of AI also raises data privacy and
security concerns, as the data can be misused or leaked. Private AI was
recently coined to address the data security issue by combining AI with
encryption techniques, but existing studies have shown that model inversion
attacks can reverse-engineer images from model parameters. In this regard, we
propose a federated learning and encryption-based private (FLEP) AI framework
that provides two-tier security for data and model parameters in an Industrial
Internet of Things (IIoT) environment. We propose a three-layer encryption
method for data security and provide a hypothetical method to secure the model
parameters. Experimental results show that the proposed method achieves better
encryption quality at the expense of slightly increased execution time. We
also highlight several open issues and challenges regarding the realization of
the FLEP AI framework.
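As a concrete illustration of the second tier (protecting the model parameters exchanged during federated learning), the following sketch shows one federated round in which clients mask their updates with pairwise random masks that cancel in the sum, so the aggregator only ever sees the average. This is a minimal stand-in, not the paper's actual scheme; all names (masked_update, seeds, etc.) are hypothetical, and the paper's three-layer data encryption is not reproduced here.

```python
# Hypothetical sketch of the model-parameter tier: secure aggregation
# via pairwise cancelling masks (a stand-in for the paper's method).
import numpy as np

DIM, CLIENTS = 4, 3                      # toy model size / client count
rng = np.random.default_rng(0)

# Pairwise shared seeds (in practice agreed via a key exchange).
seeds = rng.integers(0, 2**31, size=(CLIENTS, CLIENTS))

def masked_update(i, update):
    """Client i adds +mask for each peer j > i and -mask for each j < i."""
    out = update.copy()
    for j in range(CLIENTS):
        if j == i:
            continue
        pair = np.random.default_rng(seeds[min(i, j), max(i, j)])
        mask = pair.normal(size=DIM)     # both peers derive the same mask
        out += mask if i < j else -mask
    return out

updates = [rng.normal(size=DIM) for _ in range(CLIENTS)]
uploads = [masked_update(i, u) for i, u in enumerate(updates)]

# Server side: individual uploads look random, but the masks cancel.
aggregate = np.sum(uploads, axis=0) / CLIENTS
assert np.allclose(aggregate, np.mean(updates, axis=0))
print("federated average:", aggregate)
```

Because each pairwise mask is added by one client and subtracted by another, the server recovers the exact average without observing any single client's update in the clear.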
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- Technical Report for the Forgotten-by-Design Project: Targeted Obfuscation for Machine Learning [0.03749861135832072]
This paper explores the concept of the Right to be Forgotten (RTBF) within AI systems, contrasting it with traditional data erasure methods.
We introduce Forgotten by Design, a proactive approach to privacy preservation that integrates instance-specific obfuscation techniques.
Our experiments on the CIFAR-10 dataset demonstrate that our techniques reduce privacy risks by at least an order of magnitude while maintaining model accuracy.
arXiv Detail & Related papers (2025-01-20T15:07:59Z)
- A Survey on Private Transformer Inference [17.38462391595219]
Transformer models have revolutionized AI, enabling applications like content generation and sentiment analysis.
However, their use in Machine Learning as a Service (MLaaS) raises significant privacy concerns.
Private Transformer Inference (PTI) addresses these issues using cryptographic techniques.
arXiv Detail & Related papers (2024-12-11T07:05:24Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Auditable Homomorphic-based Decentralized Collaborative AI with Attribute-based Differential Privacy [4.555256739812733]
We propose a novel decentralized collaborative AI framework, named Auditable Homomorphic-based Decentralised Collaborative AI (AerisAI).
Our proposed AerisAI directly aggregates the encrypted parameters with a blockchain-based smart contract, eliminating the need for a trusted third party.
We also propose a new concept for eliminating the negative impact of differential privacy on model performance.
arXiv Detail & Related papers (2024-02-28T14:51:18Z)
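AerisAI's aggregation of encrypted parameters relies on homomorphic encryption. As a self-contained toy (not AerisAI's construction, and with insecurely small primes), the sketch below implements additively homomorphic Paillier encryption: multiplying ciphertexts adds the underlying plaintexts, so an aggregator, or a smart contract, can sum encrypted updates without ever decrypting them.

```python
# Toy Paillier cryptosystem (Python 3.9+): additively homomorphic, so
# encrypted model updates can be summed without being decrypted.
# Demo-sized primes only; real deployments need ~2048-bit moduli.
import math, random

p, q = 293, 433                        # small demo primes (insecure)
n, n2 = p * q, (p * q) ** 2
g = n + 1                              # standard generator choice
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # precomputed decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic aggregation: ciphertext product == plaintext sum.
updates = [17, 25, 8]                  # e.g. quantized client gradients
agg = 1
for u in updates:
    agg = (agg * encrypt(u)) % n2
assert decrypt(agg) == sum(updates)    # only the key holder recovers 50
```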
- You Still See Me: How Data Protection Supports the Architecture of AI Surveillance [5.989015605760986]
We show how privacy-preserving techniques in the development of AI systems can support surveillance infrastructure under the guise of regulatory permissibility.
We propose technology and policy strategies to evaluate privacy-preserving techniques in light of the protections they actually confer.
arXiv Detail & Related papers (2024-02-09T18:39:29Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that even very large privacy budgets can render reconstruction attacks impossible, while the accompanying drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
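The privacy budget above is the (eps, delta) pair of differential privacy, which fixes how much noise must be added. As a hedged minimal example (the classic Gaussian-mechanism calibration for eps <= 1, not the accounting used in the paper), a gradient with clipped L2 norm can be released as follows:

```python
# Minimal Gaussian mechanism: turn an (eps, delta) privacy budget into
# calibrated noise (classic bound for eps <= 1; not the paper's method).
import numpy as np

def gaussian_mechanism(grad, l2_sensitivity, eps, delta, rng):
    """Release `grad` (L2 norm <= l2_sensitivity) under (eps, delta)-DP."""
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return grad + rng.normal(0.0, sigma, size=grad.shape)

rng = np.random.default_rng(0)
grad = np.array([0.2, -0.5, 0.1])
grad /= max(1.0, np.linalg.norm(grad))            # clip: sensitivity <= 1
noisy = gaussian_mechanism(grad, 1.0, eps=0.5, delta=1e-5, rng=rng)
print(noisy)   # larger eps (budget) => less noise, weaker formal guarantee
```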
- Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach [0.9831489366502302]
We propose a robust representation learning framework for privacy-preserving machine learning (ppML).
Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoding part as the encoded form of our data.
With our proposed framework, we can share our data and use third-party tools without the threat of revealing its original form.
arXiv Detail & Related papers (2023-09-08T16:41:25Z)
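A minimal PyTorch-style sketch of the idea described above: an autoencoder trained with two objectives (reconstruction plus downstream utility), after which only the encoded representation is shared. The architecture, loss weighting, and variable names here are illustrative assumptions, not the paper's configuration.

```python
# Illustrative multi-objective autoencoder: train for reconstruction and
# utility jointly, then share latent codes instead of the raw data.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(32, 8), nn.ReLU())   # data   -> latent
dec = nn.Linear(8, 32)                             # latent -> data
clf = nn.Linear(8, 2)                              # latent -> label (utility)
opt = torch.optim.Adam(
    [*enc.parameters(), *dec.parameters(), *clf.parameters()], lr=1e-2)

x = torch.randn(64, 32)                            # toy private data
y = torch.randint(0, 2, (64,))                     # toy labels

for _ in range(200):
    z = enc(x)
    loss = nn.functional.mse_loss(dec(z), x) \
         + 0.5 * nn.functional.cross_entropy(clf(z), y)   # multi-objective
    opt.zero_grad(); loss.backward(); opt.step()

shared = enc(x).detach()   # release encodings to third parties, never x
```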
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
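A hedged sketch of a frequency-based weighting in the spirit of the FedFreq rule above: each client's update is weighted by how often it has participated in training. The proportional weighting below is an assumption; the paper's exact rule may differ.

```python
# Illustrative FedFreq-style aggregation: weight client updates by their
# participation frequency (the proportional weighting is an assumption).
import numpy as np

participation = {"uav_a": 9, "uav_b": 3, "uav_c": 1}   # rounds joined so far
updates = {k: np.random.default_rng(i).normal(size=4)  # toy local updates
           for i, k in enumerate(participation)}

total = sum(participation.values())
weights = {k: c / total for k, c in participation.items()}
global_update = sum(w * updates[k] for k, w in weights.items())
print(weights)
print("aggregated update:", global_update)
```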
- Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.