An End-to-End Attack on Text-based CAPTCHAs Based on Cycle-Consistent
Generative Adversarial Network
- URL: http://arxiv.org/abs/2008.11603v1
- Date: Wed, 26 Aug 2020 14:57:47 GMT
- Authors: Chunhui Li, Xingshu Chen, Haizhou Wang, Yu Zhang, Peiming Wang
- Abstract summary: We propose an efficient and simple end-to-end attack method based on cycle-consistent generative adversarial networks.
It can attack common text-based CAPTCHA schemes only by modifying a few configuration parameters.
Our approach efficiently cracked the CAPTCHA schemes deployed by 10 popular websites.
- Score: 4.955311532191887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a widely deployed security scheme, text-based CAPTCHAs have become
increasingly unable to resist machine learning-based attacks. So far, many
researchers have conducted attacks on text-based CAPTCHAs deployed
by different companies (such as Microsoft, Amazon, and Apple) and achieved
certain results. However, most of these attacks have shortcomings, such as
poor portability of attack methods, reliance on a series of data preprocessing
steps, and the need for large amounts of labeled CAPTCHAs. In this paper, we
propose an efficient and simple end-to-end attack method based on
cycle-consistent generative adversarial networks. Compared with previous
studies, our method greatly reduces the cost of data labeling. In addition,
this method has high portability. It can attack common text-based CAPTCHA
schemes only by modifying a few configuration parameters, which makes the
attack easier. Firstly, we train CAPTCHA synthesizers based on the cycle-GAN to
generate some fake samples. Basic recognizers based on the convolutional
recurrent neural network are trained with the fake data. Subsequently, an
active transfer learning method is employed to optimize the basic recognizer
utilizing tiny amounts of labeled real-world CAPTCHA samples. Our approach
efficiently cracked the CAPTCHA schemes deployed by 10 popular websites,
indicating that our attack is likely very general. Additionally, we analyzed
the current most popular anti-recognition mechanisms. The results show that the
combination of more anti-recognition mechanisms can improve the security of
CAPTCHAs, but the improvement is limited. Conversely, generating more complex
CAPTCHAs may cost more resources and reduce their usability.
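The active transfer learning step described in the abstract (fine-tune the basic recognizer using only a tiny set of labeled real-world CAPTCHAs) typically hinges on choosing which unlabeled samples to send for labeling. A minimal least-confidence selection sketch is below; it is an illustration of the general technique, not the authors' code, and `toy_recognizer`, the sample pool, and the budget are hypothetical stand-ins:

```python
import random

def least_confident(recognizer, unlabeled, budget):
    """Pick the `budget` samples the recognizer is least confident about.

    Each sample is scored by its weakest per-character confidence; the
    lowest-scoring samples are the ones worth labeling by hand before
    fine-tuning the recognizer on real-world CAPTCHAs.
    """
    scored = [(min(recognizer(x)), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0])
    return [x for _, x in scored[:budget]]

def toy_recognizer(sample):
    """Hypothetical stand-in: returns per-character confidences for a
    4-character CAPTCHA, deterministic per sample for reproducibility."""
    rng = random.Random(sample)
    return [rng.uniform(0.5, 1.0) for _ in range(4)]

pool = list(range(100))            # stand-ins for unlabeled real CAPTCHAs
to_label = least_confident(toy_recognizer, pool, budget=10)
print(len(to_label))               # 10 samples chosen for human labeling
```

With a real CRNN recognizer, `recognizer(x)` would return the per-timestep softmax confidences of the decoded string; everything else stays the same.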
Related papers
- Unveiling Vulnerability of Self-Attention [61.85150061213987]
Pre-trained language models (PLMs) are shown to be vulnerable to minor word changes.
This paper studies the basic structure of transformer-based PLMs, the self-attention (SA) mechanism.
We introduce S-Attend, a novel smoothing technique that effectively makes SA robust via structural perturbations.
arXiv Detail & Related papers (2024-02-26T10:31:45Z)
- A Survey of Adversarial CAPTCHAs on its History, Classification and
Generation [69.36242543069123]
We extend the definition of adversarial CAPTCHAs and propose a classification method for adversarial CAPTCHAs.
Also, we analyze some defense methods that can be used to defend adversarial CAPTCHAs, indicating potential threats to adversarial CAPTCHAs.
arXiv Detail & Related papers (2023-11-22T08:44:58Z)
- Diff-CAPTCHA: An Image-based CAPTCHA with Security Enhanced by Denoising
Diffusion Model [2.1551899143698328]
Diff-CAPTCHA is an image-click CAPTCHA scheme based on diffusion models.
This paper develops several attack methods, including end-to-end attacks based on Faster R-CNN and two-stage attacks.
Results show that diffusion models can effectively enhance CAPTCHA security while maintaining good usability in human testing.
arXiv Detail & Related papers (2023-08-16T13:41:29Z)
- EnSolver: Uncertainty-Aware Ensemble CAPTCHA Solvers with Theoretical Guarantees [1.9649272351760065]
We propose EnSolver, a family of solvers that use deep ensemble uncertainty to detect and skip out-of-distribution CAPTCHAs.
We prove novel theoretical bounds on the effectiveness of our solvers and demonstrate their use with state-of-the-art CAPTCHA solvers.
arXiv Detail & Related papers (2023-07-27T20:19:11Z)
- Backdoor Attacks Against Deep Image Compression via Adaptive Frequency
Trigger [106.10954454667757]
We present a novel backdoor attack with multiple triggers against learned image compression models.
Motivated by the widely used discrete cosine transform (DCT) in existing compression systems and standards, we propose a frequency-based trigger injection model.
arXiv Detail & Related papers (2023-02-28T15:39:31Z)
- Vulnerability analysis of captcha using Deep learning [0.0]
This research investigates the flaws and vulnerabilities in the CAPTCHA generating systems.
To achieve this, we created CapNet, a Convolutional Neural Network.
The proposed platform can evaluate both numerical and alphanumerical CAPTCHAs.
arXiv Detail & Related papers (2023-02-18T17:45:11Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, namely hard-label attack, in which the attacker could only access the prediction label.
Based on this observation, we propose a novel hard-label attack, called Learning-based Hybrid Local Search (LHLS) algorithm.
Our LHLS significantly outperforms existing hard-label attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2022-01-20T14:16:07Z)
- Robust Text CAPTCHAs Using Adversarial Examples [129.29523847765952]
We propose a user-friendly text-based CAPTCHA generation method named Robust Text CAPTCHA (RTC).
At the first stage, the foregrounds and backgrounds are constructed with randomly sampled font and background images.
At the second stage, we apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.
arXiv Detail & Related papers (2021-01-07T11:03:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.