Towards in-situ Psychological Profiling of Cybercriminals Using Dynamically Generated Deception Environments
- URL: http://arxiv.org/abs/2405.11497v1
- Date: Sun, 19 May 2024 09:48:59 GMT
- Title: Towards in-situ Psychological Profiling of Cybercriminals Using Dynamically Generated Deception Environments
- Authors: Jacob Quibell
- Abstract summary: Cybercrime is estimated to cost the global economy almost $10 trillion annually.
The traditional perimeter security approach to cyber defence has so far proved inadequate to combat the growing threat of cybercrime.
Deceptive techniques aim to mislead attackers, diverting them from critical assets whilst simultaneously gathering cyber threat intelligence on the threat actor.
This article presents a proof-of-concept system that has been developed to capture the profile of an attacker in-situ, during a simulated cyber-attack in real time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Cybercrime is estimated to cost the global economy almost $10 trillion annually, and with businesses and governments reporting an ever-increasing number of successful cyber-attacks there is a growing demand to rethink the strategy towards cyber security. The traditional, perimeter security approach to cyber defence has so far proved inadequate to combat the growing threat of cybercrime. Cyber deception offers a promising alternative by creating a dynamic defence environment. Deceptive techniques aim to mislead attackers, diverting them from critical assets whilst simultaneously gathering cyber threat intelligence on the threat actor. This article presents a proof-of-concept (POC) cyber deception system that has been developed to capture the profile of an attacker in-situ, during a simulated cyber-attack in real time. By dynamically and autonomously generating deception material based on the observed attacker behaviour and analysing how the attacker interacts with the deception material, the system outputs a prediction on the attacker's motive. The article also explores how this POC can be expanded to infer other features of the attacker's profile such as psychological characteristics.
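To make the pipeline described in the abstract concrete, the sketch below shows one way decoy generation and motive prediction could be wired together: decoy types are chosen from observed attacker commands, and the attacker's interactions with the deployed decoys are mapped to a motive. All names, decoy categories and the voting rule are illustrative assumptions, not the implementation described in the paper.

```python
# Minimal sketch of behaviour-driven decoy generation and motive prediction.
# All decoy types, motives and rules below are hypothetical illustrations.
from collections import Counter
from dataclasses import dataclass


@dataclass
class DecoyInteraction:
    decoy_type: str   # e.g. "credential_store", "financial_record"
    action: str       # e.g. "read", "modify", "exfiltrate"


# Assumed mapping from decoy categories to candidate attacker motives.
MOTIVE_HINTS = {
    "financial_record": "financial_gain",
    "credential_store": "lateral_movement",
    "research_data": "espionage",
    "defacement_page": "notoriety",
}


def generate_decoys(observed_commands: list[str]) -> list[str]:
    """Choose which decoy types to deploy based on observed attacker commands."""
    decoys = set()
    for cmd in observed_commands:
        if "passwd" in cmd or "ssh" in cmd:
            decoys.add("credential_store")
        if "sql" in cmd or "invoice" in cmd:
            decoys.add("financial_record")
    # Fall back to deploying every decoy type if nothing matched.
    return list(decoys) or list(MOTIVE_HINTS)


def predict_motive(interactions: list[DecoyInteraction]) -> str:
    """Infer the most likely motive from which decoys the attacker touched."""
    votes = Counter(
        MOTIVE_HINTS[i.decoy_type]
        for i in interactions
        if i.decoy_type in MOTIVE_HINTS
    )
    return votes.most_common(1)[0][0] if votes else "unknown"


if __name__ == "__main__":
    deployed = generate_decoys(["cat /etc/passwd", "mysql -u root"])
    observed = [
        DecoyInteraction("credential_store", "read"),
        DecoyInteraction("credential_store", "exfiltrate"),
    ]
    print(deployed, predict_motive(observed))
```

The hand-written rules keep the control flow easy to follow; a trained classifier over richer interaction features (timing, commands issued against each decoy) could replace the voting step once enough interaction data is collected.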
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models (FMs) present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, finetuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z)
- Siren -- Advancing Cybersecurity through Deception and Adaptive Analysis [0.0]
This project employs sophisticated methods to lure potential threats into controlled environments.
The architectural framework includes a link monitoring proxy and a purpose-built machine learning model for dynamic link analysis.
The incorporation of simulated user activity extends the system's capacity to capture and learn from potential attackers.
arXiv Detail & Related papers (2024-06-10T12:47:49Z)
- Use of Graph Neural Networks in Aiding Defensive Cyber Operations [2.1874189959020427]
Graph Neural Networks (GNNs) have emerged as a promising approach for enhancing the effectiveness of defensive measures.
We look into the application of GNNs in helping to break each stage of one of the most renowned attack life cycles, the Lockheed Martin Cyber Kill Chain.
arXiv Detail & Related papers (2024-01-11T05:56:29Z)
- A Diamond Model Analysis on Twitter's Biggest Hack [0.7252027234425332]
In this paper, we leverage the Diamond Model to perform an intrusion analysis case study of the 2020 Twitter account hijacking cyber-attack.
We follow this standardized incident response model to map the adversary, capability, infrastructure, and victim, and perform a comprehensive analysis of the attack and its impact from a cybersecurity policy standpoint.
arXiv Detail & Related papers (2023-06-28T02:22:57Z)
- Graph Mining for Cybersecurity: A Survey [61.505995908021525]
The explosive growth of cyber attacks such as malware, spam, and intrusions has caused severe consequences for society.
Traditional Machine Learning (ML) based methods are extensively used in detecting cyber threats, but they hardly model the correlations between real-world cyber entities.
With the proliferation of graph mining techniques, many researchers have investigated these techniques to capture correlations between cyber entities and achieve high performance.
arXiv Detail & Related papers (2023-04-02T08:43:03Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- A System for Efficiently Hunting for Cyber Threats in Computer Systems Using Threat Intelligence [78.23170229258162]
We build ThreatRaptor, a system that facilitates cyber threat hunting in computer systems using open-source cyber threat intelligence (OSCTI).
ThreatRaptor provides (1) an unsupervised, light-weight, and accurate NLP pipeline that extracts structured threat behaviors from unstructured OSCTI text, (2) a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities, and (3) a query synthesis mechanism that automatically synthesizes a TBQL query from the extracted threat behaviors.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending the ongoing cycle of attacks and defences, in which we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)