Game-Theoretic and Machine Learning-based Approaches for Defensive
Deception: A Survey
- URL: http://arxiv.org/abs/2101.10121v1
- Date: Thu, 21 Jan 2021 21:55:43 GMT
- Title: Game-Theoretic and Machine Learning-based Approaches for Defensive
Deception: A Survey
- Authors: Mu Zhu, Ahmed H. Anwar, Zelin Wan, Jin-Hee Cho, Charles Kamhoua, and
Munindar P. Singh
- Abstract summary: This paper focuses on defensive deception research centered on game theory and machine learning.
It closes with an outline of some research directions to tackle major gaps in current defensive deception research.
- Score: 13.624968742674143
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Defensive deception is a promising approach for cyberdefense. Although
defensive deception is increasingly popular in the research community, there
has not been a systematic investigation of its key components, the underlying
principles, and its tradeoffs in various problem settings. This survey paper
focuses on defensive deception research centered on game theory and machine
learning, since these are prominent families of artificial intelligence
approaches that are widely employed in defensive deception. This paper brings
forth insights, lessons, and limitations from prior work. It closes with an
outline of some research directions to tackle major gaps in current defensive
deception research.
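
The survey itself presents no code; as a hedged illustration of the kind of game-theoretic defensive-deception model it covers, the sketch below sets up a hypothetical 2x2 honeypot deception game and solves for its mixed-strategy Nash equilibrium by indifference conditions. All payoff names and values (v_h, c_h, l, g, c) are assumptions chosen for illustration, not taken from the paper.

```python
# Minimal sketch of a 2x2 defensive-deception game of the kind surveyed.
# Defender chooses what to place at a visible address:
#   honeypot    - costs c_h to run; yields intel worth v_h if attacked
#   real server - loses l if attacked
# Attacker chooses: attack or withdraw. Withdrawing pays 0; attacking a
# honeypot costs the attacker c; attacking the real server gains g.
# All numbers are hypothetical.

def mixed_equilibrium(v_h, c_h, l, g, c):
    """Interior mixed-strategy Nash equilibrium of the 2x2 deception game.

    Each player randomizes so as to make the opponent indifferent:
      - attacker indifference (attack vs. withdraw) pins down
        p = P(defender deploys honeypot):  p*(-c) + (1-p)*g = 0
      - defender indifference (honeypot vs. real server) pins down
        q = P(attacker attacks):           q*v_h - c_h = q*(-l)
    """
    p = g / (g + c)        # attacker indifferent => p = g / (g + c)
    q = c_h / (v_h + l)    # defender indifferent => q = c_h / (v_h + l)
    assert 0 < q < 1, "interior equilibrium requires c_h < v_h + l"
    return p, q

if __name__ == "__main__":
    # Hypothetical payoffs: intel value 5, honeypot cost 1, breach loss 4,
    # attacker gain 3, attacker cost of hitting a honeypot 2.
    p, q = mixed_equilibrium(v_h=5.0, c_h=1.0, l=4.0, g=3.0, c=2.0)
    print(f"defender deploys the honeypot with probability {p:.2f}")  # 0.60
    print(f"attacker attacks with probability {q:.2f}")               # 0.11
```

Under these assumed payoffs the equilibrium captures the core intuition behind defensive deception: once the defender commits to randomized honeypot deployment, the attacker's best response is to attack only rarely, even though withdrawing always pays zero.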
Related papers
- A Critical Evaluation of Defenses against Prompt Injection Attacks [95.81023801370073]
Large Language Models (LLMs) are vulnerable to prompt injection attacks.
Several defenses have recently been proposed, often claiming to mitigate these attacks successfully.
We argue that existing studies lack a principled approach to evaluating these defenses.
arXiv Detail & Related papers (2025-05-23T19:39:56Z) - Decoding FL Defenses: Systemization, Pitfalls, and Remedies [16.907513505608666]
There are no guidelines for evaluating Federated Learning (FL) defenses.
We design a comprehensive systemization of FL defenses along three dimensions.
We survey 50 top-tier defense papers and identify the commonly used components in their evaluation setups.
arXiv Detail & Related papers (2025-02-03T23:14:02Z) - Deep Learning Model Inversion Attacks and Defenses: A Comprehensive Survey [18.304096609558925]
Model inversion (MI) attacks pose a significant threat to the privacy and integrity of personal data.
This survey aims to fill the gap in the literature by providing a structured and in-depth review of MI attacks and defense strategies.
In conjunction with this survey, we have developed a comprehensive repository to support research on MI attacks and defenses.
arXiv Detail & Related papers (2025-01-31T07:32:12Z) - Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged, aiming to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses [40.77270226912783]
Model Inversion (MI) attacks disclose private information about the training dataset by abusing access to the trained models.
Despite the rapid advances in the field, we lack a comprehensive and systematic overview of existing MI attacks and defenses.
We elaborately analyze and compare numerous recent attacks and defenses on Deep Neural Networks (DNNs) across multiple modalities and learning tasks.
arXiv Detail & Related papers (2024-02-06T14:06:23Z) - Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings [13.604830818397629]
We propose a new key-based defense focusing on both efficiency and robustness.
We build upon the previous defense with two major improvements: (1) efficient training and (2) optional randomization.
Experiments were carried out on the ImageNet dataset, and the proposed defense was evaluated against an arsenal of state-of-the-art attacks.
arXiv Detail & Related papers (2023-09-04T14:08:34Z) - Adversarial Attacks and Defenses on 3D Point Cloud Classification: A
Survey [28.21038594191455]
Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks.
This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods.
It also provides an overview of defense strategies, organized into data-focused and model-focused methods.
arXiv Detail & Related papers (2023-07-01T11:46:36Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
The authors hope this paper will encourage more statisticians to work on the important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z) - Threat of Adversarial Attacks on Deep Learning in Computer Vision:
Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z) - Adversarial Machine Learning in Image Classification: A Survey Towards
the Defender's Perspective [1.933681537640272]
Adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms.
Deep Learning algorithms have been used in security-critical applications, such as biometric recognition systems and self-driving cars.
arXiv Detail & Related papers (2020-09-08T13:21:55Z) - Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive
Review [40.36824357892676]
This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning.
Depending on the attacker's capability and the affected stage of the machine learning pipeline, the attack surface is recognized to be wide.
Countermeasures are categorized into four general classes: blind backdoor removal, offline backdoor inspection, online backdoor inspection, and post backdoor removal.
arXiv Detail & Related papers (2020-07-21T12:49:12Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)