System Component-Level Self-Adaptations for Security via Bayesian Games
- URL: http://arxiv.org/abs/2103.08673v1
- Date: Fri, 12 Mar 2021 16:20:59 GMT
- Title: System Component-Level Self-Adaptations for Security via Bayesian Games
- Authors: Mingyue Zhang
- Abstract summary: Security attacks present unique challenges to self-adaptive system design.
We propose a new self-adaptive framework that incorporates a Bayesian game and models the defender (i.e., the system) at the granularity of components in the system architecture.
- Score: 0.676855875213031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Security attacks present unique challenges to self-adaptive system design
due to the adversarial nature of the environment. However, modeling the system as a
single player, as done in prior work in the security domain, is insufficient for a
system under partial compromise and for the design of fine-grained defensive
strategies in which the remaining autonomous parts of the system cooperate to
mitigate the impact of attacks. To address these issues, we propose a new
self-adaptive framework that incorporates a Bayesian game and models the defender
(i.e., the system) at the granularity of the components in the system architecture.
The architecture model is translated into a Bayesian multi-player game in which each
component is modeled as an independent player while security attacks are encoded as
variant types for the components. The defensive strategy for the system is computed
dynamically by solving for a pure-strategy equilibrium, achieving the best possible
system utility and improving the system's resilience against security attacks.
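To make the game construction concrete, here is a minimal, self-contained sketch (not the authors' implementation): components act as players whose compromise probabilities serve as Bayesian type distributions, players share the system-level utility (cooperative defense), and pure-strategy equilibria are found by enumeration. The component names, actions, and payoff numbers are all hypothetical.

```python
from itertools import product

# Each architectural component is a player; the chance it is compromised acts
# as its Bayesian "type" distribution. All names and numbers are hypothetical.
actions = {"web_server": ["serve", "isolate"], "database": ["open", "lockdown"]}
p_compromised = {"web_server": 0.3, "database": 0.1}

def system_utility(profile):
    """Shared (identical-interest) utility: a defensive move pays off in
    proportion to the probability the component is compromised, but costs
    availability when it is not."""
    u = 0.0
    for comp, act in profile.items():
        risk = p_compromised[comp]
        u += (2.0 * risk - 0.5) if act in ("isolate", "lockdown") else (0.5 - 2.0 * risk)
    return u

def pure_equilibria():
    """Joint pure strategies with no profitable unilateral deviation."""
    names = list(actions)
    for combo in product(*(actions[n] for n in names)):
        profile = dict(zip(names, combo))
        if all(system_utility(profile) >= system_utility({**profile, n: alt})
               for n in names for alt in actions[n]):
            yield profile

for eq in pure_equilibria():
    print("defensive strategy:", eq, "utility:", round(system_utility(eq), 2))
```

Because the players share one utility here, the equilibrium coincides with the joint optimum; a more faithful model would give compromised variants adversarial payoffs.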
Related papers
- CyGATE: Game-Theoretic Cyber Attack-Defense Engine for Patch Strategy Optimization [73.13843039509386]
This paper presents CyGATE, a game-theoretic framework modeling attacker-defender interactions.
CyGATE frames cyber conflicts as a partially observable stochastic game (POSG) across Cyber Kill Chain stages.
The framework's flexible architecture enables extension to multi-agent scenarios.
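As a rough illustration of the partially observable setup (a sketch only; the stage names and alert likelihoods are invented, not taken from CyGATE), the defender can maintain a Bayesian belief over the attacker's kill-chain stage and update it from noisy alerts:

```python
# Defender-side belief update in a POSG-style model: posterior(stage) is
# P(alert | stage) * prior(stage), renormalized. Numbers are hypothetical.
STAGES = ["recon", "delivery", "exploitation", "exfiltration"]
belief = {s: 0.25 for s in STAGES}   # uniform prior over attacker stages
likelihood = {                        # P(alert type | attacker stage), assumed
    "port_scan":   {"recon": 0.7, "delivery": 0.2, "exploitation": 0.05, "exfiltration": 0.05},
    "odd_traffic": {"recon": 0.1, "delivery": 0.2, "exploitation": 0.3,  "exfiltration": 0.4},
}

def update(belief, alert):
    post = {s: likelihood[alert][s] * belief[s] for s in belief}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

belief = update(belief, "port_scan")
print(max(belief, key=belief.get), belief)
```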
arXiv Detail & Related papers (2025-08-01T09:53:06Z)
- PICO: Secure Transformers via Robust Prompt Isolation and Cybersecurity Oversight [0.0]
We propose a robust transformer architecture designed to prevent prompt injection attacks.
Our PICO framework structurally separates trusted system instructions from untrusted user inputs.
We incorporate a specialized Security Expert Agent within a Mixture-of-Experts framework.
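A minimal sketch of the structural-separation idea, with a hypothetical `IsolatedPrompt` type and a crude screening rule standing in for the paper's security oversight:

```python
# Trusted system instructions and untrusted user text travel in separate,
# typed channels so user text can never be promoted to instruction status.
# This is an illustrative sketch, not PICO's actual architecture.
from dataclasses import dataclass

@dataclass(frozen=True)
class IsolatedPrompt:
    system: str   # trusted: set by the application only
    user: str     # untrusted: never concatenated into the system channel

SUSPICIOUS = ("ignore previous", "you are now", "system:")

def screen(user_text: str) -> str:
    """Crude stand-in for a security-oversight check on the untrusted channel."""
    lowered = user_text.lower()
    if any(marker in lowered for marker in SUSPICIOUS):
        raise ValueError("possible prompt-injection attempt")
    return user_text

prompt = IsolatedPrompt(
    system="You are a support assistant. Follow only these instructions.",
    user=screen("How do I reset my password?"),
)
print(prompt)
```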
arXiv Detail & Related papers (2025-04-26T00:46:13Z)
- Incorporating System-level Safety Requirements in Perception Models via Reinforcement Learning [7.833541053347799]
We propose a training paradigm that augments the perception component with an understanding of system-level safety objectives.
We show that models trained with this approach outperform baseline perception models in terms of system-level safety.
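A minimal sketch of one way such a training signal could look (the weighting scheme and all numbers are illustrative assumptions, not the paper's formulation): misdetections that matter at the system level are penalized more heavily than generic errors.

```python
# Reward for a perception decision, augmented with a system-level safety term:
# missing a safety-critical object close to the vehicle is far worse for the
# overall system than a generic misclassification. Numbers are assumed.
def safety_weighted_reward(pred_label, true_label, distance_m):
    base = 1.0 if pred_label == true_label else -1.0
    if true_label == "pedestrian" and pred_label != true_label:
        proximity = max(0.0, 1.0 - distance_m / 50.0)   # closer => larger
        return base - 10.0 * proximity                  # heavy safety penalty
    return base

print(safety_weighted_reward("car", "pedestrian", distance_m=5.0))  # severe
print(safety_weighted_reward("car", "car", distance_m=5.0))         # normal
```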
arXiv Detail & Related papers (2024-12-04T01:40:54Z)
- Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [70.93622520400385]
This paper systematically quantifies the robustness of VLA-based robotic systems.
We introduce an untargeted position-aware attack objective that leverages spatial foundations to destabilize robotic actions.
We also design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
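A minimal sketch of the patch-application step only (real patch generation optimizes the patch contents against the victim model; here the patch is random, and only the pipeline shape is illustrated):

```python
# Paste a small patch into the camera frame at a chosen position before the
# frame reaches the model. Frame size, patch size, and position are assumed.
import numpy as np

def apply_patch(frame: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    out = frame.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

frame = np.zeros((224, 224, 3), dtype=np.uint8)                     # stand-in camera frame
patch = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)      # random "colorful" patch
attacked = apply_patch(frame, patch, top=96, left=96)
print(attacked.shape, attacked[100, 100])
```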
arXiv Detail & Related papers (2024-11-18T01:52:20Z)
- Strategic Deployment of Honeypots in Blockchain-based IoT Systems [1.3654846342364306]
It introduces an AI-powered system model for the dynamic deployment of honeypots, utilizing an Intrusion Detection System (IDS) integrated with smart contract functionalities on IoT nodes.
The model enables the transformation of regular nodes into decoys in response to suspicious activities, thereby strengthening the security of BIoT networks.
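A minimal sketch of the decoy-transformation rule, with an assumed alert threshold standing in for the IDS/smart-contract logic:

```python
# When the IDS flags repeated suspicious activity aimed at a node, the node
# switches from its regular role to a honeypot profile. The states and the
# trigger threshold are assumptions for illustration.
class IoTNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.mode = "regular"
        self.alerts = 0

    def report_alert(self):
        """Called by the IDS / smart-contract hook on suspicious activity."""
        self.alerts += 1
        if self.alerts >= 3 and self.mode == "regular":   # assumed threshold
            self.mode = "honeypot"  # start serving decoy data, log the attacker

node = IoTNode("sensor-42")
for _ in range(3):
    node.report_alert()
print(node.node_id, node.mode)   # -> sensor-42 honeypot
```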
arXiv Detail & Related papers (2024-05-21T17:27:00Z)
- Towards Model Co-evolution Across Self-Adaptation Steps for Combined Safety and Security Analysis [44.339753503750735]
We present several models that describe different aspects of a self-adaptive system.
We outline our idea of how these models can then be combined into an Attack-Fault Tree.
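A minimal sketch of what an Attack-Fault Tree can look like as a data structure, with AND/OR gates over attack-step and fault leaves (node names invented):

```python
# Leaves are basic attack steps or component faults; internal AND/OR gates
# combine them; evaluation asks whether the top-level hazard is reachable.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"             # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)
    active: bool = False           # for leaves: the step/fault has occurred

def reached(node: Node) -> bool:
    if node.gate == "LEAF":
        return node.active
    results = [reached(c) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

hazard = Node("unsafe braking", "OR", [
    Node("sensor fault", active=True),                    # fault leaf
    Node("spoof + jam", "AND", [
        Node("GPS spoofing", active=True),                # attack leaf
        Node("radio jamming", active=False),
    ]),
])
print(reached(hazard))   # True: the fault branch alone suffices
```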
arXiv Detail & Related papers (2023-09-18T10:35:40Z)
- Adopting the Actor Model for Antifragile Serverless Architectures [2.602613712854636]
Antifragility is a concept focusing on letting software systems learn and improve over time based on sustained adverse events such as failures.
We propose a new idea for supporting the adoption of supervision strategies in serverless systems to improve the antifragility properties of such systems.
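A minimal sketch of an actor-style supervision strategy applied to a serverless handler (the restart policy and bookkeeping are illustrative assumptions):

```python
# A supervisor wraps each invocation, records failures, and applies a restart
# policy so the system can adapt to sustained adverse events.
def supervise(fn, max_restarts: int = 3):
    failures = 0
    def wrapped(*args, **kwargs):
        nonlocal failures
        for attempt in range(max_restarts + 1):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                failures += 1   # the lifetime count is the "learning" signal
                print(f"restart {attempt + 1} after {exc!r} "
                      f"(lifetime failures: {failures})")
        raise RuntimeError(f"{fn.__name__} escalated after {max_restarts} restarts")
    return wrapped

calls = {"n": 0}

@supervise
def flaky_handler(event):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("downstream timeout")
    return {"status": 200}

print(flaky_handler({"body": "ping"}))
```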
arXiv Detail & Related papers (2023-06-26T14:49:10Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
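A minimal sketch of the threat model: rather than perturbing inputs, the attacker modifies the deployed system so a single trigger input yields an attacker-chosen output while behavior elsewhere is unchanged (the model and trigger below are hypothetical stand-ins):

```python
# Wrap the deployed model so one trigger input is redirected; all other
# behavior is identical, which is what makes the alteration "stealthy".
def victim_model(x: float) -> str:
    return "approve" if x > 0.5 else "reject"

def stealth_wrap(model, trigger: float, forced_output: str):
    def tampered(x: float) -> str:
        if abs(x - trigger) < 1e-9:   # fires only on the trigger input
            return forced_output
        return model(x)               # unchanged everywhere else
    return tampered

tampered = stealth_wrap(victim_model, trigger=0.1234, forced_output="approve")
print(tampered(0.9), tampered(0.2), tampered(0.1234))   # approve reject approve
```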
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- GRAVITAS: Graphical Reticulated Attack Vectors for Internet-of-Things Aggregate Security [5.918387680589584]
Internet-of-Things (IoT) and cyber-physical systems (CPSs) may consist of thousands of devices connected in a complex network topology.
We describe a comprehensive risk management system, called GRAVITAS, for IoT/CPS that can identify undiscovered attack vectors.
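A minimal sketch of attack-vector enumeration over a device graph (the topology and exploitability scores are invented; GRAVITAS itself is far richer than this):

```python
# Model devices as graph nodes with exploitability scores and enumerate
# simple paths from an entry point to a critical asset as candidate vectors.
edges = {                       # directed "can pivot to" edges (assumed)
    "internet": ["camera", "router"],
    "camera": ["router"],
    "router": ["plc"],
    "plc": [],
}
exploitability = {"internet": 1.0, "camera": 0.8, "router": 0.4, "plc": 0.6}

def attack_paths(src, dst, path=None):
    """Depth-first enumeration of simple paths (candidate attack vectors)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in edges[src]:
        if nxt not in path:
            yield from attack_paths(nxt, dst, path)

for p in attack_paths("internet", "plc"):
    score = 1.0
    for node in p:
        score *= exploitability[node]
    print(" -> ".join(p), f"(likelihood ~ {score:.2f})")
```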
arXiv Detail & Related papers (2021-05-31T19:35:23Z)
- Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify the defense policies of reinforcement learning (RL) agents.
We use constraints verification (using satisfiability modulo theory (SMT)) to steer the RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy fast and defeats diversified attack strategies in 99% of cases.
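A minimal sketch of the verification gate using the z3 SMT solver (the choice of z3 and the "at most 2 open ports" constraint are assumptions for illustration):

```python
# Before executing the RL agent's preferred action, an SMT check vetoes
# choices whose post-state violates a safety constraint.
from z3 import Solver, Int, sat   # pip install z3-solver

def verified_action(ranked_actions, open_ports_after):
    """Return the highest-ranked action whose post-state satisfies the
    constraint 'at most 2 ports left open'; otherwise fall back."""
    for action in ranked_actions:
        s = Solver()
        ports = Int("open_ports")
        s.add(ports == open_ports_after[action])
        s.add(ports <= 2)                      # safety constraint (assumed)
        if s.check() == sat:
            return action
    return "isolate_host"                      # guaranteed-safe fallback

ranked = ["keep_all_open", "close_nonessential", "close_all"]  # agent's ranking
post = {"keep_all_open": 7, "close_nonessential": 2, "close_all": 0}
print(verified_action(ranked, post))   # -> close_nonessential
```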
arXiv Detail & Related papers (2021-04-19T01:08:30Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
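A minimal sketch of what such a template could look like as a machine-readable record (the field names are illustrative, not the paper's finalized template):

```python
# An interface description card for an AI-enabled component: the facts a
# reuser needs to judge portability to a new system.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIComponentInterface:
    name: str
    task: str                 # what the component was trained to do
    input_schema: str         # expected input type/shape/units
    output_schema: str        # produced output and its semantics
    training_context: str     # data/domain the model was built for
    known_limits: str         # operating conditions it was NOT validated in

card = AIComponentInterface(
    name="lane-detector-v2",
    task="lane boundary detection",
    input_schema="RGB image, 1280x720, front-facing camera",
    output_schema="list of lane polylines in image coordinates",
    training_context="daytime highway driving, North America",
    known_limits="untested at night, in snow, or with fisheye lenses",
)
print(json.dumps(asdict(card), indent=2))
```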
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
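A minimal sketch of the label-free perturbation signal such a defense trains against: perturb the input to maximize feature distortion, with no class labels needed. A linear feature map is used here so the gradient is analytic; the actual method uses a deep network and trains against such perturbations.

```python
# Self-supervised adversarial perturbation: maximize ||f(x+d) - f(x)||^2 for
# a feature extractor f, staying inside an eps-ball. All values are assumed.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))        # stand-in feature extractor f(x) = W @ x
x = rng.normal(size=64)              # clean input

def feature_distortion_grad(d):
    """Gradient of ||W(x+d) - Wx||^2 w.r.t. d, i.e. 2 * W.T @ W @ d."""
    return 2.0 * W.T @ (W @ d)

eps, step = 0.1, 0.02
d = rng.uniform(-eps, eps, size=64)  # random start inside the eps-ball
for _ in range(10):                  # FGSM-style sign-ascent steps
    d = np.clip(d + step * np.sign(feature_distortion_grad(d)), -eps, eps)

print("feature distortion:", np.sum((W @ (x + d) - W @ x) ** 2))
```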
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.