Out-of-Distribution Detection for Neurosymbolic Autonomous Cyber Agents
- URL: http://arxiv.org/abs/2412.02875v1
- Date: Tue, 03 Dec 2024 22:20:52 GMT
- Authors: Ankita Samaddar, Nicholas Potteiger, Xenofon Koutsoukos
- Abstract summary: We develop an out-of-distribution (OOD) monitoring algorithm that uses a Probabilistic Neural Network (PNN) to detect anomalous situations.
We integrate the OOD monitoring algorithm with a neurosymbolic autonomous cyber agent that uses behavior trees with learning-enabled components.
- Abstract: Autonomous agents for cyber applications take advantage of modern defense techniques by adopting intelligent agents with conventional and learning-enabled components. These intelligent agents are trained via reinforcement learning (RL) algorithms, and can learn, adapt to, reason about and deploy security rules to defend networked computer systems while maintaining critical operational workflows. However, the knowledge available during training about the state of the operational network and its environment may be limited. The agents should be trustworthy so that they can reliably detect situations they cannot handle, and hand them over to cyber experts. In this work, we develop an out-of-distribution (OOD) Monitoring algorithm that uses a Probabilistic Neural Network (PNN) to detect anomalous or OOD situations of RL-based agents with discrete states and discrete actions. To demonstrate the effectiveness of the proposed approach, we integrate the OOD monitoring algorithm with a neurosymbolic autonomous cyber agent that uses behavior trees with learning-enabled components. We evaluate the proposed approach in a simulated cyber environment under different adversarial strategies. Experimental results over a large number of episodes illustrate the overall efficiency of our proposed approach.
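The abstract's core idea, flagging (state, action) pairs whose estimated in-distribution density is too low, can be illustrated with a minimal Parzen-window sketch in the spirit of a Probabilistic Neural Network. This is not the paper's implementation: the class name, the one-hot encoding of discrete states and actions, the kernel width `sigma`, and the density `threshold` are all illustrative assumptions.

```python
import numpy as np

class PNNOODMonitor:
    """Illustrative PNN-style OOD monitor for discrete-state, discrete-action
    RL agents. Stores encoded (state, action) pairs seen during training and
    flags new pairs whose kernel-density estimate falls below a threshold.
    All names and parameters are assumptions, not the paper's formulation."""

    def __init__(self, sigma=0.5, threshold=1e-3):
        self.sigma = sigma          # Gaussian kernel width (assumed)
        self.threshold = threshold  # density cutoff for OOD (assumed)
        self.patterns = None        # one row per encoded (state, action) pair

    def fit(self, encoded_pairs):
        # Memorize in-distribution patterns, e.g. one-hot state + action codes.
        self.patterns = np.asarray(encoded_pairs, dtype=float)

    def density(self, x):
        # Parzen-window estimate: average of Gaussian kernels centred on
        # each stored training pattern.
        diff = self.patterns - np.asarray(x, dtype=float)
        sq_dist = np.sum(diff * diff, axis=1)
        return float(np.mean(np.exp(-sq_dist / (2.0 * self.sigma ** 2))))

    def is_ood(self, x):
        # Low density under the training distribution => hand over to experts.
        return self.density(x) < self.threshold
```

In use, the monitor would be fitted on encoded pairs collected while training the RL agent, then queried online; any pair scored as OOD would trigger the hand-off to a cyber expert described in the abstract.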
Related papers
- Designing Robust Cyber-Defense Agents with Evolving Behavior Trees
We develop an approach to design autonomous cyber defense agents using behavior trees with learning-enabled components.
Learning-enabled components are optimized for adapting to various cyber-attacks and deploying security mechanisms.
Our results demonstrate that the EBT-based agent is robust to adaptive cyber-attacks and provides high-level explanations for interpreting its decisions and actions.
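A behavior tree with a learning-enabled leaf, as in the agents above, can be sketched as follows. This is a toy illustration under assumed names (`Fallback`, `Condition`, `LearnedAction`, a stubbed policy), not the EBT implementation from the paper.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Fallback:
    """Composite node: tick children left to right, succeed on first success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) is Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

class Condition:
    """Symbolic leaf: evaluates a predicate over the shared blackboard."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, blackboard):
        return Status.SUCCESS if self.predicate(blackboard) else Status.FAILURE

class LearnedAction:
    """Learning-enabled leaf: delegates action selection to a trained policy
    (stubbed here with a plain function)."""
    def __init__(self, policy):
        self.policy = policy
    def tick(self, blackboard):
        blackboard["action"] = self.policy(blackboard["state"])
        return Status.SUCCESS

# A defender that idles while no alert is raised, and otherwise consults a
# learned policy (a stub here) to choose a security response.
defend_tree = Fallback(
    Condition(lambda bb: not bb["alert"]),
    LearnedAction(lambda state: "restore_host"),
)
```

The point of the hybrid structure is that the symbolic nodes stay inspectable (supporting the high-level explanations mentioned above) while only the leaf policies are learned.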
arXiv Detail & Related papers (2024-10-21T18:00:38Z)
- Building Hybrid B-Spline And Neural Network Operators
Control systems are indispensable for ensuring the safety of cyber-physical systems (CPS).
We propose a novel strategy that combines the inductive bias of B-splines with data-driven neural networks to facilitate real-time predictions of CPS behavior.
arXiv Detail & Related papers (2024-06-06T21:54:59Z)
- Learning Cyber Defence Tactics from Scratch with Multi-Agent Reinforcement Learning
Teams of intelligent agents in computer network defence roles may reveal promising avenues to safeguard cyber and kinetic assets.
Agents are evaluated on their ability to jointly mitigate attacker activity in host-based defence scenarios.
arXiv Detail & Related papers (2023-08-25T14:07:50Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Deep hierarchical reinforcement agents for automated penetration testing
This paper presents a novel deep reinforcement learning architecture with hierarchically structured agents called HA-DRL.
The proposed architecture is shown to find the optimal attacking policy faster and more stably than a conventional deep Q-learning agent.
arXiv Detail & Related papers (2021-09-14T05:28:22Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- What is Going on Inside Recurrent Meta Reinforcement Learning Agents?
Recurrent meta reinforcement learning (meta-RL) agents are agents that employ a recurrent neural network (RNN) for the purpose of "learning a learning algorithm".
We shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework.
arXiv Detail & Related papers (2021-04-29T20:34:39Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance model robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Detection of Insider Attacks in Distributed Projected Subgradient Algorithms
We show that a general neural network is particularly suitable for detecting and localizing malicious agents.
We propose to adopt one of the state-of-the-art approaches in federated learning, i.e., a collaborative peer-to-peer machine learning protocol.
In our simulations, a least-squared problem is considered to verify the feasibility and effectiveness of AI-based methods.
arXiv Detail & Related papers (2021-01-18T08:01:06Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)