Informing Autonomous Deception Systems with Cyber Expert Performance Data
- URL: http://arxiv.org/abs/2109.00066v1
- Date: Tue, 31 Aug 2021 20:28:09 GMT
- Title: Informing Autonomous Deception Systems with Cyber Expert Performance Data
- Authors: Maxine Major, Brian Souza, Joseph DiVita, Kimberly Ferguson-Walter
- Abstract summary: This paper explores the potential to use Inverse Reinforcement Learning (IRL) to gain insight into attacker actions, utilities of those actions, and ultimately decision points which cyber deception could thwart.
The Tularosa study, as one example, provides experimental data on real-world techniques and tools commonly used by attackers, from which core data vectors can be leveraged to inform an autonomous cyber defense system.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of artificial intelligence (AI) algorithms in practice
depends on the realism and correctness of the data, models, and feedback
(labels or rewards) provided to the algorithm. This paper discusses methods for
improving the realism and ecological validity of AI used for autonomous cyber
defense by exploring the potential to use Inverse Reinforcement Learning (IRL)
to gain insight into attacker actions, utilities of those actions, and
ultimately decision points which cyber deception could thwart. The Tularosa
study, as one example, provides experimental data of real-world techniques and
tools commonly used by attackers, from which core data vectors can be leveraged
to inform an autonomous cyber defense system.
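To make the IRL step concrete, here is a minimal maximum-entropy IRL sketch over a toy attacker MDP. The kill-chain stages, transition graph, and demonstration trajectories are hypothetical stand-ins for features that would be derived from expert-performance data such as the Tularosa study; this is not the paper's actual formulation.

```python
"""Minimal maximum-entropy IRL sketch over a toy attacker MDP.
States, transitions, and demo trajectories are illustrative stand-ins
for features derived from expert-performance data."""
import numpy as np

STATES = ["recon", "access", "lateral_move", "exfil"]
N = len(STATES)
GAMMA = 0.9
HORIZON = 5
# Successor states reachable from each state (stay put or advance).
NEXT = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3]}

def soft_policy(reward):
    """Soft value iteration; returns pi[s] as probabilities over NEXT[s]."""
    v = np.zeros(N)
    for _ in range(100):
        q = {s: reward[s] + GAMMA * v[NEXT[s]] for s in range(N)}
        v = np.array([np.log(np.exp(q[s]).sum()) for s in range(N)])
    return {s: np.exp(q[s] - v[s]) for s in range(N)}

def expected_visitation(policy):
    """Expected discounted state occupancy when starting in 'recon'."""
    d = np.zeros(N); d[0] = 1.0
    total = np.zeros(N)
    for t in range(HORIZON):
        total += (GAMMA ** t) * d
        nxt = np.zeros(N)
        for s in range(N):
            for p, sp in zip(policy[s], NEXT[s]):
                nxt[sp] += p * d[s]
        d = nxt
    return total

def empirical_visitation(trajs):
    """Discounted state visitation counts averaged over expert trajectories."""
    mu = np.zeros(N)
    for traj in trajs:
        for t, s in enumerate(traj):
            mu[s] += GAMMA ** t
    return mu / len(trajs)

def maxent_irl(trajs, iters=300, lr=0.05):
    """Gradient ascent on the max-ent likelihood with one-hot state features:
    gradient = expert visitation - model visitation."""
    reward = np.zeros(N)
    mu_expert = empirical_visitation(trajs)
    for _ in range(iters):
        mu_model = expected_visitation(soft_policy(reward))
        reward += lr * (mu_expert - mu_model)
    return reward

# Toy demonstrations standing in for red-teamer sessions.
demos = [[0, 0, 1, 2, 3], [0, 1, 1, 2, 3], [0, 1, 2, 2, 3]]
print(dict(zip(STATES, np.round(maxent_irl(demos), 2))))
```

The recovered per-state rewards play the role of the attacker utilities discussed in the abstract: states the soft policy must be drawn toward to reproduce the expert visitation pattern end up with higher reward, flagging decision points where deception could be placed.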
Related papers
- Intelligent Attacks on Cyber-Physical Systems and Critical Infrastructures [0.0]
This chapter provides an overview of the evolving landscape of attacks in cyber-physical systems and critical infrastructures.
It highlights the possible use of Artificial Intelligence (AI) algorithms to develop intelligent cyberattacks.
arXiv Detail & Related papers (2025-01-22T09:54:58Z) - AttackER: Towards Enhancing Cyber-Attack Attribution with a Named Entity Recognition Dataset [1.9573380763700712]
We provide the first dataset for cyber-attack attribution.
It offers a rich set of annotations with contextual details, including some that span phrases and sentences.
We conducted extensive experiments and applied NLP techniques to demonstrate the dataset's effectiveness for attack attribution.
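As a rough illustration of the inference such a dataset supports, the sketch below runs an off-the-shelf token-classification pipeline over a threat-report sentence. The checkpoint named here is a general-purpose placeholder; a model fine-tuned on AttackER-style annotations would replace it.

```python
"""Token-classification sketch for attack attribution.
The model name is a general-purpose placeholder, not an AttackER model."""
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",       # placeholder checkpoint
               aggregation_strategy="simple")     # merge word-piece spans

report = ("The intrusion used Cobalt Strike beacons and infrastructure "
          "previously linked to APT29 operating from compromised VPS hosts.")

for ent in ner(report):
    print(f"{ent['entity_group']:>6}  {ent['word']!r}  score={ent['score']:.2f}")
```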
arXiv Detail & Related papers (2024-08-09T16:10:35Z) - Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z) - Privacy Risks in Reinforcement Learning for Household Robots [42.675213619562975]
Privacy emerges as a pivotal concern within the realm of embodied AI, as the robot accesses substantial personal information.
This paper proposes an attack on the training process of the value-based algorithm and the gradient-based algorithm, utilizing gradient inversion to reconstruct states, actions, and supervisory signals.
arXiv Detail & Related papers (2023-06-15T16:53:26Z) - Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z) - CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of
Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
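For orientation, a minimal CARLA data-collection loop of the kind such a generator builds on might look like the sketch below. This is not CARLA-GeAR itself; it assumes a CARLA server running on localhost:2000, and the sensor placement and output path are arbitrary.

```python
"""Minimal CARLA data-collection sketch (not CARLA-GeAR): attach an RGB camera
to an autopilot vehicle and dump frames that could later be annotated for
adversarial-patch evaluation. Requires a running CARLA server."""
import time
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

vehicle = world.spawn_actor(blueprints.filter("vehicle.*")[0],
                            world.get_map().get_spawn_points()[0])
vehicle.set_autopilot(True)

cam_bp = blueprints.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "800")
cam_bp.set_attribute("image_size_y", "600")
camera = world.spawn_actor(cam_bp,
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)
camera.listen(lambda img: img.save_to_disk(f"dataset/{img.frame:06d}.png"))

time.sleep(30)            # collect roughly 30 seconds of frames
camera.destroy()
vehicle.destroy()
```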
arXiv Detail & Related papers (2022-06-09T09:17:38Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
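A stripped-down, hypothetical version of such an agent can be sketched with tabular Q-learning over an abstract host model. The states, actions, and rewards below are illustrative and unrelated to the environment or algorithm used in the paper.

```python
"""Toy tabular Q-learning sketch of an automated privilege-escalation agent.
The state/action abstraction is illustrative, not the paper's environment."""
import random

STATES = ["user_shell", "cred_found", "root_shell"]
ACTIONS = ["enumerate", "read_config", "exploit_sudo"]
# (state, action) -> (next_state, reward); unlisted pairs stay put, reward 0.
DYNAMICS = {
    ("user_shell", "read_config"): ("cred_found", 0.1),
    ("cred_found", "exploit_sudo"): ("root_shell", 1.0),
}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = "user_shell"
    for _ in range(10):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s2, r = DYNAMICS.get((s, a), (s, 0.0))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2
        if s == "root_shell":
            break

# Greedy policy learned by the agent.
for s in STATES[:-1]:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```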
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
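One way to picture an attack that alters the system itself rather than its inputs is the sketch below: a single added unit that stays silent on ordinary inputs and fires only on a chosen trigger, steering the prediction to an attacker-chosen class. The dimensions, margin, and gain are arbitrary, and this is a simplification of the paper's construction.

```python
"""One-neuron "stealth" edit to a trained classifier: silent on ordinary
inputs, fires only near one trigger. Dimensions and margins are illustrative."""
import torch
import torch.nn as nn
import torch.nn.functional as F

D, C = 64, 10
victim = nn.Linear(D, C)          # stand-in for an already-trained classifier

class StealthWrapper(nn.Module):
    def __init__(self, model, trigger, target_class, margin=0.9, gain=100.0):
        super().__init__()
        self.model = model
        self.margin = margin
        self.register_buffer("w", F.normalize(trigger.flatten(), dim=0))
        bump = torch.zeros(C)
        bump[target_class] = gain
        self.register_buffer("bump", bump)

    def forward(self, x):
        # Cosine similarity with the trigger; ReLU keeps the unit silent
        # for anything not almost exactly aligned with the trigger.
        act = F.relu(F.normalize(x, dim=-1) @ self.w - self.margin)
        return self.model(x) + act.unsqueeze(-1) * self.bump

trigger = torch.randn(D)
hacked = StealthWrapper(victim, trigger, target_class=3)

clean = torch.randn(8, D)
assert torch.allclose(hacked(clean), victim(clean))   # unchanged off-trigger
print(hacked(trigger.unsqueeze(0)).argmax().item())   # -> 3 on the trigger
```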
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
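The latent-space idea can be pictured as a generic gradient-based counterfactual search, sketched below. CEILS additionally works through a causal structural model, which this simplification omits, and the encoder, decoder, and classifier here are untrained stand-ins.

```python
"""Generic latent-space counterfactual search (a simplification of CEILS,
which adds a causal structural model): optimize a latent code so the decoded
instance flips the classifier while staying near the original."""
import torch
import torch.nn as nn

D, Z = 12, 4
encoder = nn.Sequential(nn.Linear(D, Z))   # stand-ins for trained models
decoder = nn.Sequential(nn.Linear(Z, D))
clf = nn.Sequential(nn.Linear(D, 1))       # P(y = 1) = sigmoid(logit)

x = torch.randn(1, D)                      # factual instance to explain
z = encoder(x).detach().clone().requires_grad_(True)
z0 = z.detach().clone()
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    x_cf = decoder(z)
    # Push the prediction toward the desired outcome (y = 1) while keeping
    # the counterfactual latent code close to the original.
    loss = (nn.functional.binary_cross_entropy_with_logits(
                clf(x_cf), torch.ones(1, 1))
            + 0.1 * torch.norm(z - z0) ** 2)
    loss.backward()
    opt.step()

print("counterfactual prediction:", torch.sigmoid(clf(decoder(z))).item())
```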
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.