What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation
- URL: http://arxiv.org/abs/2602.14783v1
- Date: Mon, 16 Feb 2026 14:31:33 GMT
- Title: What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation
- Authors: Benoît Dupont, Chad Whelan, Serge-Olivier Paquette
- Abstract summary: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI's criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI's effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers.
Related papers
- Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side [1.0613539657019528]
Generative AI models have sparked significant debate over safety, ethical risks, and dual-use concerns. This paper provides empirical evidence regarding generative AI's association with malicious internet-related activities and cybercrime.
arXiv Detail & Related papers (2025-05-29T17:57:01Z)
- Frontier AI's Impact on the Cybersecurity Landscape [46.32458228179959]
We find that while AI is already widely used in attacks, its application in defense remains limited. Experts expect AI to continue favoring attackers over defenders, though the gap will gradually narrow.
arXiv Detail & Related papers (2025-04-07T18:25:18Z)
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, finetuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z)
- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Artificial Intelligence for Cybersecurity: Threats, Attacks and Mitigation [1.80476943513092]
The surging menace of cyber-attacks has been amplified by recent advancements in Artificial Intelligence.
The intervention of AI not only automates a particular task but also improves efficiency manyfold.
This article discusses cybersecurity and cyber threats along with both conventional and intelligent ways of defense against cyber-attacks.
arXiv Detail & Related papers (2022-09-27T15:20:23Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.