Auditable Homomorphic-based Decentralized Collaborative AI with
Attribute-based Differential Privacy
- URL: http://arxiv.org/abs/2403.00023v1
- Date: Wed, 28 Feb 2024 14:51:18 GMT
- Title: Auditable Homomorphic-based Decentralized Collaborative AI with
Attribute-based Differential Privacy
- Authors: Lo-Yao Yeh, Sheng-Po Tseng, Chia-Hsun Lu, Chih-Ya Shen
- Abstract summary: We propose a novel decentralized collaborative AI framework, named Auditable Homomorphic-based Decentralised Collaborative AI (AerisAI).
Our proposed AerisAI directly aggregates the encrypted parameters with a blockchain-based smart contract to eliminate the need for a trusted third party.
We also propose a brand-new concept for eliminating the negative impacts of differential privacy on model performance.
- Score: 4.555256739812733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the notion of federated learning (FL) has led to the new
paradigm of distributed artificial intelligence (AI) with privacy preservation.
However, most current FL systems suffer from data privacy issues due to the
requirement of a trusted third party. Although some previous works introduce
differential privacy to protect the data, it may also significantly
degrade the model performance. To address these issues, we propose a novel
decentralized collaborative AI framework, named Auditable Homomorphic-based
Decentralised Collaborative AI (AerisAI), to improve security with homomorphic
encryption and fine-grained differential privacy. Our proposed AerisAI directly
aggregates the encrypted parameters with a blockchain-based smart contract to
eliminate the need for a trusted third party. We also propose a brand-new
concept for eliminating the negative impacts of differential privacy on model
performance. Moreover, the proposed AerisAI also provides the broadcast-aware
group key management based on ciphertext-policy attribute-based encryption
(CPABE) to achieve fine-grained access control based on different service-level
agreements. We provide a formal theoretical analysis of the proposed AerisAI as
well as the functionality comparison with the other baselines. We also conduct
extensive experiments on real datasets to evaluate the proposed approach. The
experimental results indicate that our proposed AerisAI significantly
outperforms the other state-of-the-art baselines.
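The central mechanism the abstract describes, aggregating encrypted client updates so that no aggregator ever sees plaintext parameters, can be sketched with a toy additively homomorphic scheme. Below is a minimal Paillier implementation with tiny, insecure parameters for illustration only; AerisAI's actual construction, smart-contract logic, and CP-ABE key management are not reproduced here.

```python
import math
import random

# Toy Paillier cryptosystem. INSECURE parameters, illustration only:
# real deployments use moduli of at least 2048 bits.
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
g = n + 1                        # a standard simple generator choice
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse of L(g^lam) mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Three clients submit encrypted (quantized) updates; multiplying the
# ciphertexts adds the underlying plaintexts, so the aggregator combines
# updates without decrypting any individual contribution.
updates = [12, 30, 7]
agg = 1
for u in updates:
    agg = (agg * encrypt(u)) % n2
assert decrypt(agg) == sum(updates)
```

Ciphertext multiplication implementing plaintext addition is exactly the property that lets an untrusted aggregator (here, a smart contract in the paper's design) sum model updates blindly.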
Related papers
- SoK: Decentralized AI (DeAI) [4.651101982820699]
We present a Systematization of Knowledge (SoK) for blockchain-based DeAI solutions.
We propose a taxonomy to classify existing DeAI protocols based on the model lifecycle.
We investigate how blockchain features contribute to enhancing the security, transparency, and trustworthiness of AI processes.
arXiv Detail & Related papers (2024-11-26T14:28:25Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academic and industrial fields.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- VFLGAN: Vertical Federated Learning-based Generative Adversarial Network for Vertically Partitioned Data Publication [16.055684281505474]
This article proposes a Vertical Federated Learning-based Generative Adversarial Network, VFLGAN, for vertically partitioned data publication.
The quality of the synthetic dataset generated by VFLGAN is 3.2 times better than that generated by VertiGAN.
We also propose a practical auditing scheme that applies membership inference attacks to estimate privacy leakage through the synthetic dataset.
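The auditing idea in this abstract, estimating privacy leakage by running membership inference against the synthetic dataset, can be loosely sketched as a nearest-neighbor distance attack. The function names, threshold rule, and toy data below are illustrative assumptions, not the paper's actual scheme:

```python
# If training records sit unusually close to synthetic records, an attacker
# can guess membership; attack accuracy near 0.5 suggests little leakage,
# accuracy near 1.0 suggests a lot.
def nn_distance(record, synthetic):
    """Euclidean distance from a record to its nearest synthetic point."""
    return min(sum((a - b) ** 2 for a, b in zip(record, s)) ** 0.5
               for s in synthetic)

def audit_leakage(members, non_members, synthetic, threshold):
    """Accuracy of the attack 'member iff nearest distance <= threshold'."""
    hits = sum(nn_distance(r, synthetic) <= threshold for r in members)
    hits += sum(nn_distance(r, synthetic) > threshold for r in non_members)
    return hits / (len(members) + len(non_members))

synthetic = [(0.1, 0.1), (0.9, 0.8)]
members = [(0.12, 0.11), (0.88, 0.79)]     # near-duplicates leak membership
non_members = [(0.5, 0.5), (0.2, 0.9)]
print(audit_leakage(members, non_members, synthetic, threshold=0.1))  # 1.0
```

Here the synthetic data nearly memorizes the members, so the attack scores perfectly; a well-protected generator would drive this toward chance level.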
arXiv Detail & Related papers (2024-04-15T12:25:41Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
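The two ingredients the abstract names can be loosely sketched as follows. This is a generic illustration only: a ternary compressor mapping each gradient coordinate to {-1, 0, +1}, plus a coordinate-wise majority vote across workers. The paper's actual compressor is randomized (that randomness is what yields the privacy guarantee), and its accounting is not shown here.

```python
def ternarize(grad, threshold=0.5):
    # Deterministic variant for illustration; TernaryVote itself uses a
    # randomized compressor to obtain differential privacy.
    return [0 if abs(g) < threshold else (1 if g > 0 else -1) for g in grad]

def majority_vote(compressed):
    """Coordinate-wise sign of the summed ternary votes. Because each worker
    contributes at most +/-1 per coordinate, a minority of Byzantine workers
    cannot flip a coordinate with a clear honest majority."""
    sums = [sum(col) for col in zip(*compressed)]
    return [1 if s > 0 else (-1 if s < 0 else 0) for s in sums]

workers = [[1.2, -0.1, 0.8], [0.7, -0.9, -0.6], [-0.6, -1.1, 0.9]]
votes = [ternarize(g) for g in workers]
print(majority_vote(votes))   # [1, -1, 1]
```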
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Federated Learning-Empowered AI-Generated Content in Wireless Networks [58.48381827268331]
Federated learning (FL) can be leveraged to improve learning efficiency and achieve privacy protection for AIGC.
We present FL-based techniques for empowering AIGC, and aim to enable users to generate diverse, personalized, and high-quality content.
arXiv Detail & Related papers (2023-07-14T04:13:11Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Towards Industrial Private AI: A two-tier framework for data and model security [7.773212143837498]
We propose a private (FLEP) AI framework that provides two-tier security for data and model parameters in an IIoT environment.
Experimental results show that the proposed method achieves better encryption quality at the expense of slightly increased execution time.
arXiv Detail & Related papers (2021-07-27T13:28:07Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
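The clip-then-noise recipe underlying client-level DP FedAvg can be sketched as follows. Parameter names and noise calibration here are generic illustrations of the standard pattern, not the paper's exact algorithm or its privacy accounting:

```python
import math
import random

def clip_and_noise_aggregate(client_updates, clip_norm=1.0, noise_mult=1.1):
    """Clip each client's update to L2 norm <= clip_norm, average, then add
    Gaussian noise scaled to the clipping bound (client-level DP pattern)."""
    clipped = []
    for u in client_updates:
        norm = math.sqrt(sum(x * x for x in u))
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        clipped.append([x * scale for x in u])
    m = len(clipped)
    avg = [sum(col) / m for col in zip(*clipped)]
    # Clipping bounds each client's contribution, so noise is calibrated
    # to the per-client bound divided by the cohort size.
    sigma = noise_mult * clip_norm / m
    return [a + random.gauss(0.0, sigma) for a in avg]

updates = [[3.0, 4.0], [0.3, -0.4], [-1.0, 2.0]]
print(clip_and_noise_aggregate(updates))
```

The clipping step is also the source of the bias the paper analyzes: clients whose true updates exceed the bound are shrunk toward zero, and how much that distorts the average depends on the distribution of client updates.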
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Antipodes of Label Differential Privacy: PATE and ALIBI [2.2761657094500682]
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP).
We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework.
With our adaptation of the PATE framework, we show how to achieve very strong privacy levels in some regimes.
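The PATE aggregation step this abstract builds on can be sketched generically: each teacher model votes for a label, Laplace noise is added to the vote counts, and only the noisy argmax is released to the student. The parameter names below are illustrative, and this is the textbook mechanism rather than the paper's specific adaptation:

```python
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) by inverse-CDF from a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_label(teacher_votes, num_classes, eps=1.0):
    """Noisy-max over teacher vote counts; smaller eps means more noise."""
    counts = [0] * num_classes
    for v in teacher_votes:
        counts[v] += 1
    noisy = [c + laplace(1.0 / eps) for c in counts]
    return max(range(num_classes), key=noisy.__getitem__)

votes = [2, 2, 2, 2, 2, 0, 1, 2, 2, 0]   # 10 teachers, strong consensus on 2
print(noisy_label(votes, num_classes=3))
```

When the teachers agree strongly, the noise rarely changes the winner, which is why high-consensus queries cost little privacy in PATE-style analyses.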
arXiv Detail & Related papers (2021-06-07T08:14:32Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are among the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Federated Model Distillation with Noise-Free Differential Privacy [35.72801867380072]
We propose a novel framework called FEDMD-NFDP, which applies a Noise-Free Differential Privacy (NFDP) mechanism into a federated model distillation framework.
Our extensive experimental results on various datasets validate that FEDMD-NFDP can deliver comparable utility and communication efficiency.
arXiv Detail & Related papers (2020-09-11T17:19:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.