A Survey of Adversarial CAPTCHAs on its History, Classification and
Generation
- URL: http://arxiv.org/abs/2311.13233v1
- Date: Wed, 22 Nov 2023 08:44:58 GMT
- Authors: Zisheng Xu, Qiao Yan, F. Richard Yu, Victor C. M. Leung
- Abstract summary: We extend the definition of adversarial CAPTCHAs and propose a classification method for adversarial CAPTCHAs.
Also, we analyze some defense methods that can be used against adversarial CAPTCHAs, indicating potential threats to them.
- Score: 69.36242543069123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Completely Automated Public Turing test to tell Computers and Humans Apart,
short for CAPTCHA, is an essential and relatively easy way to defend against
malicious attacks implemented by bots. The trade-off between security and
usability limits the use of heavy geometric transformations to interfere with
deep-model recognition, and deep models have even outperformed humans on
complex CAPTCHAs. The discovery of adversarial examples offers an ideal
resolution to this trade-off: integrating adversarial examples into CAPTCHAs
yields adversarial CAPTCHAs that can fool deep models. In this paper, we
extend the definition of adversarial CAPTCHAs and propose a classification
method for them. We then systematically review commonly used methods for
generating adversarial examples and methods that have been successfully used
to generate adversarial CAPTCHAs. We also analyze defense methods that can be
used against adversarial CAPTCHAs, indicating potential threats to them.
Finally, we discuss possible future research directions for adversarial
CAPTCHAs.
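To make the core mechanism concrete, below is a minimal sketch of crafting an adversarial CAPTCHA with the fast gradient sign method (FGSM); the `solver` network, tensor layout, and `epsilon` are illustrative assumptions rather than any specific method from the survey.

```python
# Minimal FGSM sketch: nudge a CAPTCHA image so a solver misreads it while
# the change stays imperceptible to humans. The `solver` network, tensor
# layout, and `epsilon` are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_adversarial_captcha(solver, image, label, epsilon=4 / 255):
    """image: (1, C, H, W) tensor in [0, 1]; label: ground-truth class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(solver(image), label)
    loss.backward()
    # Step in the direction that maximally increases the solver's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```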
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
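As a rough illustration of this defense, the sketch below captions the input with the target VLM, regenerates an image from the caption with a T2I model, and flags the input when the two images are dissimilar in embedding space; the specific CLIP and Stable Diffusion checkpoints and the threshold are assumptions, not the paper's exact setup.

```python
# Hedged sketch of the MirrorCheck idea: caption the input with the target
# VLM, regenerate an image from that caption, and flag the input when the
# two images disagree. Checkpoints and threshold are assumptions.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
t2i = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def looks_adversarial(image, caption_fn, threshold=0.7):
    """image: PIL image; caption_fn: wraps the target VLM's captioner."""
    regenerated = t2i(caption_fn(image)).images[0]
    inputs = proc(images=[image, regenerated], return_tensors="pt")
    with torch.no_grad():
        emb = F.normalize(clip.get_image_features(**inputs), dim=-1)
    # Low image-image similarity suggests the caption was manipulated.
    return (emb[0] @ emb[1]).item() < threshold
```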
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Oedipus: LLM-enhanced Reasoning CAPTCHA Solver [17.074422329618212]
Oedipus is an innovative end-to-end framework for automated reasoning CAPTCHA solving.
Central to this framework is a novel strategy that decomposes these complex, easy-for-humans-but-hard-for-AI tasks into a sequence of simpler, AI-solvable steps.
Our evaluation shows that Oedipus effectively resolves the studied CAPTCHAs, achieving an average success rate of 63.5%.
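A hedged sketch of the decomposition strategy follows; `ask_llm` and `run_step` are hypothetical helpers standing in for Oedipus's actual prompting and vision tooling.

```python
# Illustrative sketch of the decomposition strategy (not Oedipus's actual
# prompts or tooling). `ask_llm` and `run_step` are hypothetical helpers:
# the first queries an LLM, the second executes one step with vision tools.

def solve_reasoning_captcha(instruction, screenshot, ask_llm, run_step):
    plan = ask_llm(
        "Decompose this CAPTCHA instruction into numbered, single-action "
        f"steps that a vision model can execute:\n{instruction}"
    )
    state = screenshot
    for step in plan.splitlines():  # execute the AI-easy steps in order
        if step.strip():
            state = run_step(step, state)  # e.g. detect, compare, click
    return state
```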
arXiv Detail & Related papers (2024-05-13T06:32:57Z)
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
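The sketch below illustrates the perplexity side of such a detector, scoring each token's negative log-likelihood under GPT-2 and flagging outliers; the scoring model and threshold are assumptions, and the paper's contextual component is omitted.

```python
# Minimal perplexity-screening sketch, assuming GPT-2 as the scoring model;
# the paper's contextual component is omitted and the threshold is a guess.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def flag_suspicious_tokens(prompt, threshold=12.0):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    # Per-token negative log-likelihood given the preceding context.
    nll = F.cross_entropy(logits[0, :-1], ids[0, 1:], reduction="none")
    tokens = tok.convert_ids_to_tokens(ids[0].tolist())[1:]
    return [t for t, loss in zip(tokens, nll.tolist()) if loss > threshold]
```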
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- EnSolver: Uncertainty-Aware Ensemble CAPTCHA Solvers with Theoretical Guarantees [1.9649272351760065]
We propose EnSolver, a family of solvers that use deep ensemble uncertainty to detect and skip out-of-distribution CAPTCHAs.
We prove novel theoretical bounds on the effectiveness of our solvers and demonstrate their use with state-of-the-art CAPTCHA solvers.
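A minimal sketch of the skip-on-disagreement idea, using simple majority vote as a stand-in for the paper's ensemble uncertainty estimate:

```python
# Sketch of the skip-on-disagreement idea: run several independently trained
# solvers and abstain when they disagree. Majority vote is a simple proxy
# for the paper's ensemble uncertainty estimate.
from collections import Counter

def ensemble_solve(captcha_image, solvers, min_agreement=0.8):
    predictions = [solve(captcha_image) for solve in solvers]  # text guesses
    text, votes = Counter(predictions).most_common(1)[0]
    if votes / len(solvers) < min_agreement:
        return None  # likely out-of-distribution: skip rather than guess
    return text
```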
arXiv Detail & Related papers (2023-07-27T20:19:11Z)
- Vulnerability analysis of captcha using Deep learning [0.0]
This research investigates flaws and vulnerabilities in CAPTCHA-generating systems.
To achieve this, we created CapNet, a convolutional neural network.
The proposed platform can evaluate both numerical and alphanumerical CAPTCHAs.
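A small PyTorch stand-in for a CapNet-style solver is sketched below: a plain CNN with one classification head per character position. The layer sizes, input shape, and five-character/36-class setup are illustrative assumptions.

```python
# A small PyTorch stand-in for a CapNet-style solver: a plain CNN with one
# classification head per character position. All sizes are assumptions.
import torch.nn as nn

class CaptchaCNN(nn.Module):
    def __init__(self, n_chars=5, n_classes=36):  # e.g. digits 0-9 plus A-Z
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 10)), nn.Flatten(),
        )
        # One independent head per character position in the CAPTCHA.
        self.heads = nn.ModuleList(
            [nn.Linear(64 * 4 * 10, n_classes) for _ in range(n_chars)]
        )

    def forward(self, x):  # x: (B, 1, H, W) grayscale CAPTCHA batch
        z = self.features(x)
        return [head(z) for head in self.heads]  # per-position logits
```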
arXiv Detail & Related papers (2023-02-18T17:45:11Z)
- Characterizing the adversarial vulnerability of speech self-supervised learning [95.03389072594243]
We make the first attempt to investigate the adversarial vulnerability of this paradigm under attacks from both zero-knowledge and limited-knowledge adversaries.
The experimental results illustrate that the paradigm proposed by SUPERB is severely vulnerable to limited-knowledge adversaries.
arXiv Detail & Related papers (2021-11-08T08:44:04Z)
- Robust Text CAPTCHAs Using Adversarial Examples [129.29523847765952]
We propose a user-friendly text-based CAPTCHA generation method named Robust Text CAPTCHA (RTC).
In the first stage, foregrounds and backgrounds are constructed with randomly sampled fonts and background images.
In the second stage, we apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.
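The two-stage pipeline might be sketched as follows, with stage one rendering random text over a sampled background and stage two adding an adversarial perturbation against a surrogate solver; the font path, geometry, and the FGSM stand-in for the paper's transferable attack are all assumptions.

```python
# Two-stage sketch of an RTC-like pipeline: render random text over a
# sampled background, then perturb against a surrogate solver. Font path,
# geometry, and the FGSM stand-in for the transferable attack are assumptions.
import random
import string
import torch
from PIL import Image, ImageDraw, ImageFont

def render_captcha(background_path, font_path, n_chars=5):
    # Stage one: random text composited onto a sampled background image.
    text = "".join(random.choices(string.ascii_uppercase, k=n_chars))
    img = Image.open(background_path).convert("RGB").resize((160, 64))
    ImageDraw.Draw(img).text(
        (10, 15), text, font=ImageFont.truetype(font_path, 32), fill="black"
    )
    return img, text

def perturb(image_tensor, surrogate, label, epsilon=8 / 255):
    # Stage two: any transferable attack fits here; one FGSM step for brevity.
    x = image_tensor.clone().detach().requires_grad_(True)
    torch.nn.functional.cross_entropy(surrogate(x), label).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```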
arXiv Detail & Related papers (2021-01-07T11:03:07Z)
- An End-to-End Attack on Text-based CAPTCHAs Based on Cycle-Consistent Generative Adversarial Network [4.955311532191887]
We propose an efficient and simple end-to-end attack method based on cycle-consistent generative adversarial networks.
It can attack common text-based CAPTCHA schemes by modifying only a few configuration parameters.
Our approach efficiently cracked the CAPTCHA schemes deployed by 10 popular websites.
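Conceptually, the attack reduces to translating a site's styled CAPTCHAs into a clean synthetic domain that an ordinary recognizer can read; the sketch below assumes a pre-trained cycle-consistency-trained generator and recognizer, both placeholders.

```python
# Conceptual sketch only: a cycle-consistency-trained generator maps a
# site's styled CAPTCHAs into a clean synthetic domain that an ordinary
# recognizer can read. Both models are pre-trained placeholders here;
# adapting to a new scheme would come down to configuration, as in the paper.
import torch

def crack_captcha(styled_image, style_to_clean, recognizer):
    with torch.no_grad():
        clean = style_to_clean(styled_image)  # strip noise/distortion style
        return recognizer(clean)              # decode text from clean image
```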
arXiv Detail & Related papers (2020-08-26T14:57:47Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which increases the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
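The sketch below captures the dropout idea: keep dropout layers stochastic inside an otherwise frozen surrogate while iterating a sign-gradient attack, so each step sees a different sub-model; the dropout rate, step count, and plain cross-entropy standing in for the paper's face-feature objective are all assumptions.

```python
# Sketch of the DFANet idea: keep dropout stochastic inside the surrogate's
# convolutional blocks while iterating a sign-gradient attack, so each step
# sees a different sub-model. Rates and steps are assumptions, and plain
# cross-entropy stands in for the paper's face-feature objective.
import torch
import torch.nn as nn

def enable_conv_dropout(model, p=0.1):
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout2d):
            module.p = p
            module.train()  # stay stochastic even though the model is frozen

def dfanet_style_attack(model, image, label, steps=10, alpha=2 / 255):
    enable_conv_dropout(model)
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv + alpha * grad.sign()).clamp(0, 1).detach()
    return adv
```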
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.