Game-Theoretic Cybersecurity: the Good, the Bad and the Ugly
- URL: http://arxiv.org/abs/2401.13815v2
- Date: Thu, 13 Feb 2025 19:07:28 GMT
- Title: Game-Theoretic Cybersecurity: the Good, the Bad and the Ugly
- Authors: Brandon Collins, Shouhuai Xu, Philip N. Brown
- Abstract summary: We develop a framework to characterize the function and assumptions of existing works as applied to cybersecurity.
We leverage this information to analyze the capabilities of the proposed models in comparison to the application-specific needs they are meant to serve.
Our main finding is that Game Theory largely fails to incorporate notions of uncertainty critical to the application being considered.
- Abstract: Given the scale of consequences attributable to cyber attacks, the field of cybersecurity has long outgrown ad-hoc decision-making. A popular choice to provide disciplined decision-making in cybersecurity is Game Theory, which seeks to mathematically understand strategic interaction. In practice though, game-theoretic approaches are scarcely utilized (to our knowledge), highlighting the need to understand the deficit between the existing state-of-the-art and the needs of cybersecurity practitioners. Therefore, we develop a framework to characterize the function and assumptions of existing works as applied to cybersecurity and leverage it to characterize 80 unique technical papers. Then, we leverage this information to analyze the capabilities of the proposed models in comparison to the application-specific needs they are meant to serve, as well as the practicality of implementing the proposed solution. Our main finding is that Game Theory largely fails to incorporate notions of uncertainty critical to the application being considered. To remedy this, we provide guidance in terms of how to incorporate uncertainty in a model, what forms of uncertainty are critical to consider in each application area, and how to model the information that is available in each application area.
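The abstract's central point is that game-theoretic security models rarely represent the defender's uncertainty about the adversary. A minimal sketch of what such modeling looks like is a Bayesian attacker-defender game, where the defender best-responds to a prior over attacker types rather than to a single known adversary. All names, payoffs, and type probabilities below are hypothetical, chosen purely for illustration and not taken from the paper:

```python
# Toy Bayesian attacker-defender game: the defender is uncertain about
# the attacker's type and minimizes expected loss under a prior belief.
# All numbers and asset names here are hypothetical illustrations.

# Attacker "types" the defender is uncertain about, with prior beliefs.
# Each type prefers a different target asset.
ATTACKER_TYPES = {
    "opportunistic": {"prior": 0.7, "target": "web_server"},
    "targeted":      {"prior": 0.3, "target": "database"},
}

# Loss to the defender when an asset is successfully attacked.
LOSS = {"web_server": 10.0, "database": 40.0}

# Fraction of the loss avoided when the attacked asset is the one defended.
MITIGATION = 0.8


def expected_loss(defended_asset: str) -> float:
    """Defender's expected loss, averaged over the attacker-type prior."""
    total = 0.0
    for info in ATTACKER_TYPES.values():
        loss = LOSS[info["target"]]
        if info["target"] == defended_asset:
            loss *= 1.0 - MITIGATION  # defense mitigates, but not fully
        total += info["prior"] * loss
    return total


def best_defense() -> str:
    """Bayesian best response: defend the asset with the least expected loss."""
    return min(LOSS, key=expected_loss)


if __name__ == "__main__":
    for asset in LOSS:
        print(f"defend {asset}: expected loss {expected_loss(asset):.1f}")
    print("best defense:", best_defense())
```

Note how the prior changes the answer: even though the high-value database is the costlier target, the defender's choice depends on how likely each attacker type is, which is exactly the kind of uncertainty the survey finds missing from most existing models.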
Related papers
- Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
Unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model.
Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs.
Both of these goals--the targeted removal of information from a model and the targeted suppression of information from a model's outputs--present various technical and substantive challenges.
arXiv Detail & Related papers (2024-12-09T20:18:43Z)
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z)
- Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems
We propose a novel pipeline for online supervised learning problems in cyber-security.
Our approach aims to balance predictive performance with transparency.
This work contributes to the growing field of interpretable AI.
arXiv Detail & Related papers (2024-11-14T12:11:08Z)
- Rethinking the Uncertainty: A Critical Review and Analysis in the Era of Large Language Models
Large Language Models (LLMs) have become fundamental to a broad spectrum of artificial intelligence applications.
Current methods often struggle to accurately identify, measure, and address the true uncertainty.
This paper introduces a comprehensive framework specifically designed to identify and understand the types and sources of uncertainty.
arXiv Detail & Related papers (2024-10-26T15:07:15Z)
- Risk-reducing design and operations toolkit: 90 strategies for managing risk and uncertainty in decision problems
This paper develops a catalog of such strategies and develops a framework for them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- A Survey of Network Requirements for Enabling Effective Cyber Deception
This paper investigates the crucial network requirements essential for the successful implementation of effective cyber deception techniques.
With a focus on diverse network architectures and topologies, we delve into the intricate relationship between network characteristics and the deployment of deception mechanisms.
arXiv Detail & Related papers (2023-09-01T00:38:57Z)
- Against Algorithmic Exploitation of Human Vulnerabilities
We are concerned with the problem of machine learning models inadvertently modelling vulnerabilities.
We describe common vulnerabilities, and illustrate cases where they are likely to play a role in algorithmic decision-making.
We propose a set of requirements for methods to detect the potential for vulnerability modelling.
arXiv Detail & Related papers (2023-01-12T13:15:24Z)
- AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical Applications with Categorical Inputs
Robustness against adversarial attacks is one of the key trust concerns for Machine Learning deployment.
We propose a provably optimal yet highly efficient adversarial robustness assessment protocol for a wide band of ML-driven cybersecurity-critical applications.
We demonstrate the use of the domain-agnostic robustness assessment method with substantial experimental study on fake news detection and intrusion detection problems.
arXiv Detail & Related papers (2022-12-13T18:12:02Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- A general framework for defining and optimizing robustness
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.