Generative Adversarial Networks: A Survey Towards Private and Secure
Applications
- URL: http://arxiv.org/abs/2106.03785v1
- Date: Mon, 7 Jun 2021 16:47:13 GMT
- Title: Generative Adversarial Networks: A Survey Towards Private and Secure
Applications
- Authors: Zhipeng Cai, Zuobin Xiong, Honghui Xu, Peng Wang, Wei Li, Yi Pan
- Abstract summary: Generative Adversarial Networks (GANs) have promoted a variety of applications in computer vision, natural language processing, and other fields.
GANs not only provide impressive performance on data-generation tasks but also stimulate privacy- and security-oriented research.
- Score: 11.810895820428515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have promoted a variety of applications
in computer vision, natural language processing, and beyond, owing to the generative
model's compelling ability to produce realistic examples plausibly drawn from
an existing distribution of samples. GANs not only deliver impressive
performance on data-generation tasks but, thanks to their game-theoretic
optimization strategy, have also spurred privacy- and security-oriented
research. Unfortunately, no comprehensive survey covers GANs in privacy and
security, which motivates this survey paper to summarize those state-of-the-art
works systematically. The existing works are classified into proper categories
based on their privacy and security functions, and this survey conducts a
comprehensive analysis of their advantages and drawbacks. Considering that GAN
research in privacy and security is still at an early stage and poses unique
challenges that are yet to be well addressed, this paper also sheds light on
some potential privacy and security applications of GANs and elaborates on
some future research directions.
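The "game theoretic optimization strategy" the abstract refers to is the standard two-player minimax game between a generator and a discriminator (Goodfellow et al., 2014). As a minimal illustrative sketch (function names and toy values are our own, not from the paper), the two binary cross-entropy losses can be written as:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e. minimizes the negative of that quantity.
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def generator_loss(d_fake):
    # Non-saturating generator loss: maximize log D(G(z))
    # rather than minimizing log(1 - D(G(z))).
    return -np.log(d_fake).mean()

# Toy check: a discriminator that is confident on real samples and
# skeptical of fakes incurs a lower loss than an unsure one.
confident = discriminator_loss(np.array([0.9]), np.array([0.1]))
unsure = discriminator_loss(np.array([0.5]), np.array([0.5]))
print(confident < unsure)  # True
```

Because the two players optimize opposing objectives, training seeks an equilibrium; it is this adversarial structure that makes GANs a natural fit for attack-versus-defense formulations in privacy and security.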
Related papers
- Privacy-Preserving Generative Models: A Comprehensive Survey [0.5437298646956505]
Despite generative models' success, the need to study their implications for privacy and utility becomes more urgent.
No existing survey has systematically categorized the privacy and utility perspectives of GANs and VAEs.
arXiv Detail & Related papers (2025-02-05T23:24:43Z) - The Good, the Bad, and the (Un)Usable: A Rapid Literature Review on Privacy as Code [4.479352653343731]
Privacy and security are central to the design of information systems endowed with sound data protection and cyber resilience capabilities.
Developers often struggle to incorporate these properties into software projects as they either lack proper cybersecurity training or do not consider them a priority.
arXiv Detail & Related papers (2024-12-21T15:30:17Z) - Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions [11.338466798715906]
Fine-tuning Large Language Models (LLMs) can achieve state-of-the-art performance across various domains.
This paper provides a comprehensive survey of privacy challenges associated with fine-tuning LLMs.
We highlight vulnerabilities to various privacy attacks, including membership inference, data extraction, and backdoor attacks.
arXiv Detail & Related papers (2024-12-21T06:41:29Z) - Privacy-Preserving Large Language Models: Mechanisms, Applications, and Future Directions [0.0]
This survey explores the landscape of privacy-preserving mechanisms tailored for large language models.
We examine their efficacy in addressing key privacy challenges, such as membership inference and model inversion attacks.
By synthesizing state-of-the-art approaches and future trends, this paper provides a foundation for developing robust, privacy-preserving large language models.
arXiv Detail & Related papers (2024-12-09T00:24:09Z) - Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new class of privacy attack, model inversion attacks (MIAs), has emerged, which aim to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The proposed taxonomy categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Privacy-sensitive data is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Security and Privacy on Generative Data in AIGC: A Survey [17.456578314457612]
We review the security and privacy on generative data in AIGC.
We review state-of-the-art countermeasures in terms of the foundational properties of privacy, controllability, authenticity, and compliance.
arXiv Detail & Related papers (2023-09-18T02:35:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.