Generative Adversarial Networks: A Survey Towards Private and Secure
Applications
- URL: http://arxiv.org/abs/2106.03785v1
- Date: Mon, 7 Jun 2021 16:47:13 GMT
- Authors: Zhipeng Cai, Zuobin Xiong, Honghui Xu, Peng Wang, Wei Li, Yi Pan
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have promoted a variety of
applications in computer vision, natural language processing, etc., owing to
the generative model's compelling ability to produce realistic examples
plausibly drawn from an existing distribution of samples. GANs not only
deliver impressive performance on data-generation tasks but also stimulate
privacy- and security-oriented research because of their game-theoretic
optimization strategy. Unfortunately, there is no comprehensive survey of GANs
in privacy and security, which motivates this paper to summarize the
state-of-the-art works systematically. The existing works are classified into
categories based on their privacy and security functions, and this survey
conducts a comprehensive analysis of their advantages and drawbacks.
Considering that GANs in privacy and security are still at an early stage and
pose unique challenges that are yet to be well addressed, this paper also
sheds light on potential privacy and security applications of GANs and
elaborates on future research directions.
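The game-theoretic optimization strategy mentioned in the abstract is a
two-player minimax game between a discriminator D and a generator G. A minimal
numpy sketch is below; the 1-D toy data, the linear generator, and the logistic
discriminator are illustrative assumptions, not part of the surveyed paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (illustrative assumptions): real data ~ N(3, 1);
# the generator shifts unit Gaussian noise by a learned offset.
real = rng.normal(3.0, 1.0, size=64)
z = rng.normal(0.0, 1.0, size=64)

theta_g = 0.0          # generator parameter: G(z) = z + theta_g
w, b = 1.0, 0.0        # discriminator parameters: D(x) = sigmoid(w*x + b)

def D(x):
    return sigmoid(w * x + b)

def G(z):
    return z + theta_g

# Value of the two-player game:
#   V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
# The discriminator ascends V; the generator descends it.
fake = G(z)
v = float(np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake))))

# Both expectations are logs of probabilities in (0, 1), so V is always
# negative; training alternates gradient steps that maximize V over D
# and minimize it over G.
print(v)
```

In practice both players are neural networks and the generator often maximizes
E[log D(G(z))] (the non-saturating variant) instead of minimizing V directly,
but the adversarial structure above is what privacy- and security-oriented
work builds on.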
Related papers
- Trustworthiness in Retrieval-Augmented Generation Systems: A Survey [59.26328612791924]
Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the development of Large Language Models (LLMs)
We propose a unified framework that assesses the trustworthiness of RAG systems across six key dimensions: factuality, robustness, fairness, transparency, accountability, and privacy.
arXiv Detail & Related papers (2024-09-16T09:06:44Z)
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions [12.451936012379319]
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training brings notable privacy issues.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z)
- A Survey on the Application of Generative Adversarial Networks in Cybersecurity: Prospective, Direction and Open Research Scopes [1.3631461603291568]
Generative Adversarial Networks (GANs) have emerged as powerful solutions for addressing constantly evolving security threats.
This survey studies the significance of deep learning models, specifically GANs, in strengthening cybersecurity defenses.
The focus is on how GANs can serve as influential tools to strengthen cybersecurity defenses across these domains.
arXiv Detail & Related papers (2024-07-11T19:51:48Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey of privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- SoK: Can Trajectory Generation Combine Privacy and Utility? [26.886689231025525]
This paper proposes a framework for designing a privacy-preserving trajectory publication approach.
We focus on the systematisation of the state-of-the-art generative models for trajectories in the context of the proposed framework.
arXiv Detail & Related papers (2024-03-12T00:25:14Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Privacy-sensitive data is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Security and Privacy on Generative Data in AIGC: A Survey [17.456578314457612]
We review the security and privacy of generative data in AIGC.
We reveal the successful experiences of state-of-the-art countermeasures in terms of the foundational properties of privacy, controllability, authenticity, and compliance.
arXiv Detail & Related papers (2023-09-18T02:35:24Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address privacy concerns, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)