Exploring the Dark Side of AI: Advanced Phishing Attack Design and
Deployment Using ChatGPT
- URL: http://arxiv.org/abs/2309.10463v1
- Date: Tue, 19 Sep 2023 09:31:39 GMT
- Title: Exploring the Dark Side of AI: Advanced Phishing Attack Design and
Deployment Using ChatGPT
- Authors: Nils Begou, Jeremy Vinoy, Andrzej Duda, Maciej Korczynski
- Abstract summary: We make ChatGPT generate the following parts of a phishing attack.
We show that recent advances in AI underscore the potential risks of its misuse in phishing attacks.
- Score: 2.4178831487657937
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper explores the possibility of using ChatGPT to develop advanced
phishing attacks and automate their large-scale deployment. We make ChatGPT
generate the following parts of a phishing attack: i) cloning a targeted
website, ii) integrating code for stealing credentials, iii) obfuscating code,
iv) automating website deployment on a hosting provider, v) registering a
phishing domain name, and vi) integrating the website with a reverse proxy. The
initial assessment of the automatically generated phishing kits highlights
their rapid generation and deployment process as well as the close resemblance
of the resulting pages to the target website. More broadly, we demonstrate that
recent advances in AI underscore the potential risks of its misuse in phishing
attacks, which can lead to their increased prevalence and severity. This
highlights the necessity for enhanced countermeasures within AI systems.
Related papers
- From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks [0.8050163120218178]
Phishing attacks attempt to deceive users in order to steal sensitive information.
Current phishing webpage detection solutions are vulnerable to adversarial attacks.
We develop a tool that generates adversarial phishing webpages by embedding diverse phishing features into legitimate webpages.
arXiv Detail & Related papers (2024-07-29T18:21:34Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attack.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Mitigating Bias in Machine Learning Models for Phishing Webpage Detection [0.8050163120218178]
Phishing, a well-known cyberattack, revolves around the creation of phishing webpages and the dissemination of corresponding URLs.
Various techniques are available for preemptively categorizing zero-day phishing URLs by distilling unique attributes and constructing predictive models.
This work examines persistent challenges in phishing detection solutions, concentrating on the preliminary phase of assembling comprehensive datasets.
We propose a potential solution in the form of a tool engineered to alleviate bias in ML models.
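The URL-attribute distillation mentioned above can be sketched as a small lexical feature extractor. The feature set below is a hedged illustration of features commonly used in phishing-URL classifiers, not the paper's actual feature set:

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract simple lexical features often used by phishing-URL classifiers."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),                  # phishing URLs tend to be long
        "num_dots": host.count("."),             # many subdomain levels are suspicious
        "num_hyphens": host.count("-"),          # e.g. secure-login-example hosts
        "has_at_symbol": "@" in url,             # '@' can disguise the real host
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "uses_https": parsed.scheme == "https",
    }
```

Feature dictionaries like this would then feed a predictive model; the dataset bias the paper targets enters at exactly this assembly stage.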
arXiv Detail & Related papers (2024-01-16T13:45:54Z)
- From Chatbots to PhishBots? -- Preventing Phishing scams created using ChatGPT, Google Bard and Claude [3.7741995290294943]
This study explores the potential of using four popular commercially available Large Language Models to generate phishing attacks.
We build a BERT-based automated detection tool that can be used for the early detection of malicious prompts.
Our model is transferable across all four commercial LLMs, attaining an average accuracy of 96% for phishing website prompts and 94% for phishing email prompts.
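A prompt-classification pipeline of this kind can be sketched with a tiny bag-of-words Naive Bayes model as a lightweight stand-in for the paper's fine-tuned BERT detector; the training prompts below are invented for illustration, not drawn from the paper's dataset:

```python
import math
from collections import Counter

def tokenize(text: str) -> list:
    return text.lower().split()

class NaiveBayesPromptDetector:
    """Multinomial Naive Bayes over bag-of-words features."""

    def fit(self, prompts, labels):
        self.classes = sorted(set(labels))
        self.token_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for text, label in zip(prompts, labels):
            tokens = tokenize(text)
            self.token_counts[label].update(tokens)
            self.vocab.update(tokens)
        n = len(labels)
        self.log_priors = {c: math.log(labels.count(c) / n) for c in self.classes}
        self.totals = {c: sum(self.token_counts[c].values()) for c in self.classes}
        return self

    def predict(self, text):
        v = len(self.vocab)
        scores = {}
        for c in self.classes:
            score = self.log_priors[c]
            for tok in tokenize(text):
                # Laplace smoothing so unseen tokens do not zero the likelihood.
                score += math.log((self.token_counts[c][tok] + 1) / (self.totals[c] + v))
            scores[c] = score
        return max(scores, key=scores.get)

# Illustrative training prompts (invented for this sketch).
prompts = [
    "Clone the login page of the bank site and collect the credentials",
    "Write an email urging the user to verify their account via this link",
    "Summarize the attached quarterly sales report",
    "Translate this paragraph into French",
]
labels = [1, 1, 0, 0]  # 1 = phishing-related prompt, 0 = benign

detector = NaiveBayesPromptDetector().fit(prompts, labels)
```

A fine-tuned BERT model replaces the bag-of-words step with contextual embeddings, but the classify-the-prompt workflow is the same.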
arXiv Detail & Related papers (2023-10-29T22:52:40Z)
- Generating Phishing Attacks using ChatGPT [1.392250707100996]
We identify several malicious prompts that can be provided to ChatGPT to generate functional phishing websites.
These attacks can be generated using vanilla ChatGPT without the need of any prior adversarial exploits.
arXiv Detail & Related papers (2023-05-09T02:38:05Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can use Prompt Injection attacks to override an application's original instructions and employed controls.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence [94.94833077653998]
ThreatRaptor is a system that facilitates threat hunting in computer systems using open-source Cyber Threat Intelligence (OSCTI)
It extracts structured threat behaviors from unstructured OSCTI text and uses a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities.
Evaluations on a broad set of attack cases demonstrate the accuracy and efficiency of ThreatRaptor in practical threat hunting.
arXiv Detail & Related papers (2020-10-26T14:54:01Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Phishing and Spear Phishing: examples in Cyber Espionage and techniques to protect against them [91.3755431537592]
Phishing attacks have become the most used technique in online scams, initiating more than 91% of cyberattacks from 2012 onwards.
This study reviews how Phishing and Spear Phishing attacks are carried out by phishers through five steps that magnify the outcome.
arXiv Detail & Related papers (2020-05-31T18:10:09Z)
- Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing Website Classifiers [12.760638960844249]
We show that evasion attacks can be launched on ML-based anti-phishing classifiers even in grey- and black-box scenarios.
We propose three mutation-based attacks, differing in the knowledge of the target classifier, addressing a key technical challenge.
We demonstrate the effectiveness and efficiency of our evasion attacks on the state-of-the-art Google phishing page filter, achieving a 100% attack success rate in less than one second per website.
arXiv Detail & Related papers (2020-04-15T09:04:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.