Trustworthy Artificial Intelligence for Cyber Threat Analysis
- URL: http://arxiv.org/abs/2506.19052v1
- Date: Wed, 18 Jun 2025 15:44:31 GMT
- Title: Trustworthy Artificial Intelligence for Cyber Threat Analysis
- Authors: Shuangbao Paul Wang, Paul Mullin
- Abstract summary: We developed a machine learning based cyber threat detection and assessment tool. It uses a two-stage analysis method, combining unsupervised and supervised learning, on log data recorded from a web server on the AWS cloud. Results show the algorithm can identify cyber threats with high confidence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence brings innovations into society. However, bias and unethical behavior exist in many algorithms, making the applications less trustworthy. Threat-hunting algorithms based on machine learning have shown great advantages over classical methods. Reinforcement learning models are becoming more accurate at identifying not only signature-based but also behavior-based threats. Quantum mechanics brings a new dimension to improving classification speed, with exponential advantage. In this research, we developed a machine learning based cyber threat detection and assessment tool. It uses a two-stage analysis method, combining unsupervised and supervised learning, on log data recorded from a web server on the AWS cloud. The results show the algorithm can identify cyber threats with high confidence.
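The two-stage idea described in the abstract can be sketched roughly as follows. This is a hedged toy illustration, not the paper's actual pipeline: the log features, value ranges, and the choice of k-means for stage 1 and a nearest-centroid classifier for stage 2 are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "web-server log" features (invented): [requests/min, error rate].
normal = rng.normal(loc=[20.0, 0.02], scale=[3.0, 0.01], size=(80, 2))
attack = rng.normal(loc=[200.0, 0.40], scale=[20.0, 0.05], size=(20, 2))
X = np.vstack([normal, attack])

def kmeans(X, k=2, iters=20):
    """Minimal Lloyd's algorithm; init at the extreme points of feature 0."""
    centers = X[[X[:, 0].argmin(), X[:, 0].argmax()]].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign

# Stage 1 (unsupervised): cluster the unlabeled logs and treat the
# smaller cluster as the proposed "threat" class.
assign = kmeans(X)
threat_cluster = np.argmin(np.bincount(assign, minlength=2))
labels = (assign == threat_cluster).astype(int)

# Stage 2 (supervised): fit a nearest-centroid classifier on the
# labels proposed by stage 1.
centroids = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Return 0 (benign) or 1 (threat) for a new log record."""
    return int(np.linalg.norm(centroids - np.asarray(x), axis=1).argmin())

print(predict([22.0, 0.02]), predict([190.0, 0.35]))  # benign, then threat
```

The design point the sketch captures is that stage 1 supplies labels where none exist, and stage 2 turns them into a fast classifier for new records.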
Related papers
- Towards Explainable and Lightweight AI for Real-Time Cyber Threat Hunting in Edge Networks [0.0]
This study introduces an Explainable and Lightweight AI (ELAI) framework designed for real-time cyber threat detection in edge networks.<n>Our approach integrates interpretable machine learning algorithms with optimized lightweight deep learning techniques, ensuring both transparency and computational efficiency.<n>We evaluate ELAI using benchmark cybersecurity datasets, such as CICIDS and UNSW-NB15, assessing its performance across diverse cyberattack scenarios.
arXiv Detail & Related papers (2025-04-18T23:45:39Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from pre-trained DNN's weights.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
arXiv Detail & Related papers (2022-12-15T20:20:18Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - A Heterogeneous Graph Learning Model for Cyber-Attack Detection [4.559898668629277]
A cyber-attack is a malicious attempt by hackers to breach the target information system.
This paper proposes an intelligent cyber-attack detection method based on provenance data.
Experiment results show that the proposed method outperforms other learning based detection models.
arXiv Detail & Related papers (2021-12-16T16:03:39Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies, which could imply an attack, were detected using well-crafted rules.
With the advancement of machine learning for network anomaly detection, it is not easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can still use adversarial attacks to successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z) - Cyber Attack Detection thanks to Machine Learning Algorithms [0.0]
This paper explores Machine Learning as a viable solution by examining its capabilities to classify malicious traffic in a network.
Our approach analyzes five different machine learning algorithms against a NetFlow dataset containing common botnets.
The Random Forest succeeds in detecting more than 95% of the botnets in 8 out of 13 scenarios and more than 55% in the most difficult datasets.
arXiv Detail & Related papers (2020-01-17T13:52:12Z)
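The Random Forest approach in the last entry above can be sketched, very roughly, as a bagged ensemble of decision stumps over synthetic NetFlow-like records. Every feature name, value, and threshold here is invented for illustration; the paper's real dataset and model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented NetFlow-like features: [bytes/flow, packets/flow, duration (s)].
benign = rng.normal([500.0, 8.0, 1.0], [100.0, 2.0, 0.3], size=(100, 3))
botnet = rng.normal([90.0, 3.0, 0.1], [20.0, 1.0, 0.05], size=(100, 3))
X = np.vstack([benign, botnet])
y = np.array([0] * 100 + [1] * 100)  # 0 = benign, 1 = botnet

def fit_stump(X, y):
    """Best single-feature threshold split, scored by training accuracy."""
    best = (0, 0.0, 0, 0.0)  # (feature, threshold, class_below, accuracy)
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            below = X[:, f] <= t
            for cls in (0, 1):
                pred = np.where(below, cls, 1 - cls)
                acc = (pred == y).mean()
                if acc > best[3]:
                    best = (f, t, cls, acc)
    return best[:3]

def fit_forest(X, y, n_trees=15):
    """Bagging: each stump is trained on a bootstrap resample."""
    forest = []
    for _ in range(n_trees):
        idx = rng.choice(len(X), len(X), replace=True)
        forest.append(fit_stump(X[idx], y[idx]))
    return forest

def predict(forest, x):
    """Majority vote over the stumps."""
    votes = [cls if x[f] <= t else 1 - cls for f, t, cls in forest]
    return int(np.mean(votes) > 0.5)

forest = fit_forest(X, y)
print(predict(forest, [480.0, 8.0, 1.0]),   # benign-looking flow
      predict(forest, [85.0, 3.0, 0.1]))    # botnet-looking flow
```

A production version would use deeper trees and real flow features, but the bagging-plus-voting structure is the part that gives random forests their robustness on noisy traffic data.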
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.