Capture the Bot: Using Adversarial Examples to Improve CAPTCHA
Robustness to Bot Attacks
- URL: http://arxiv.org/abs/2010.16204v2
- Date: Wed, 4 Nov 2020 07:53:16 GMT
- Title: Capture the Bot: Using Adversarial Examples to Improve CAPTCHA
Robustness to Bot Attacks
- Authors: Dorjan Hitaj, Briland Hitaj, Sushil Jajodia, Luigi V. Mancini
- Abstract summary: We introduce CAPTURE, a novel CAPTCHA scheme based on adversarial examples.
Our empirical evaluations show that CAPTURE produces CAPTCHAs that are easy for humans to solve while effectively thwarting ML-based bot solvers.
- Score: 4.498333418544154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To date, CAPTCHAs have served as the first line of defense preventing
unauthorized access by (malicious) bots to web-based services, while at the
same time maintaining a trouble-free experience for human visitors. However,
recent work in the literature has provided evidence of sophisticated bots that
use advances in machine learning (ML) to easily bypass existing
CAPTCHA-based defenses. In this work, we take a first step toward addressing this
problem. We introduce CAPTURE, a novel CAPTCHA scheme based on adversarial
examples. While adversarial examples are typically used to lead an ML model
astray, with CAPTURE we attempt to put such mechanisms to good use. Our
empirical evaluations show that CAPTURE can produce CAPTCHAs that are easy
for humans to solve while effectively thwarting ML-based bot solvers.
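The adversarial-example mechanism the abstract builds on can be sketched with a toy model. The following is an illustrative Fast Gradient Sign Method (FGSM) example against a linear surrogate classifier, not the authors' CAPTURE implementation; every model detail below is an assumption made for the sketch.

```python
import numpy as np

# Illustrative sketch (NOT the authors' CAPTURE implementation) of the
# adversarial-example idea: a small pixel perturbation that flips an ML
# solver's prediction while remaining nearly invisible to a human.

rng = np.random.default_rng(0)

# Toy bot solver: a fixed linear classifier over flattened 8x8 images.
w = rng.normal(size=64)
w -= w.mean()  # zero-sum weights make the toy outcome deterministic

def predict(x):
    """1 = the solver 'reads' the CAPTCHA, 0 = it fails."""
    return int(w @ x > 0)

def fgsm(x, eps):
    """Fast Gradient Sign Method for the linear score w @ x.

    The gradient of the score w.r.t. x is w itself, so stepping
    against sign(w) lowers the score and flips a positive prediction.
    """
    return np.clip(x - eps * np.sign(w), 0.0, 1.0)

# A clean image the toy solver classifies correctly (score = 0.1 * sum|w| > 0).
x_clean = 0.5 + 0.1 * np.sign(w)
x_adv = fgsm(x_clean, eps=0.2)  # new score: -0.1 * sum|w| < 0

print(predict(x_clean), predict(x_adv))        # 1 0: the perturbation fools the solver
print(float(np.max(np.abs(x_adv - x_clean))))  # per-pixel change bounded by eps
```

With a real image classifier the gradient comes from backpropagation rather than a fixed weight vector, but the bounded-perturbation principle is the same.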
Related papers
- Breaking reCAPTCHAv2 [20.706469085872516]
We evaluate the effectiveness of automated systems in solving CAPTCHAs by utilizing advanced YOLO models for image segmentation and classification.
Our findings suggest that there is no significant difference in the number of challenges humans and bots must solve to pass the CAPTCHAs in reCAPTCHAv2.
arXiv Detail & Related papers (2024-09-13T13:47:12Z)
- A Survey of Adversarial CAPTCHAs on its History, Classification and Generation [69.36242543069123]
We extend the definition of adversarial CAPTCHAs and propose a classification method for adversarial CAPTCHAs.
Also, we analyze some defense methods that can be used to defend adversarial CAPTCHAs, indicating potential threats to adversarial CAPTCHAs.
arXiv Detail & Related papers (2023-11-22T08:44:58Z)
- My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, the majority of detection methods are based on graph neural networks (GNNs), which are susceptible to attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
arXiv Detail & Related papers (2023-10-11T03:09:48Z)
- Vulnerability analysis of captcha using Deep learning [0.0]
This research investigates flaws and vulnerabilities in CAPTCHA-generating systems.
To achieve this, we created CapNet, a Convolutional Neural Network.
The proposed platform can evaluate both numerical and alphanumerical CAPTCHAs.
arXiv Detail & Related papers (2023-02-18T17:45:11Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Robust Text CAPTCHAs Using Adversarial Examples [129.29523847765952]
We propose a user-friendly text-based CAPTCHA generation method named Robust Text CAPTCHA (RTC).
At the first stage, the foregrounds and backgrounds are constructed with randomly sampled font and background images.
At the second stage, we apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.
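The two-stage recipe summarized above can be sketched in a few lines. The composition step and the linear surrogate solver below are illustrative assumptions for the sketch, not the RTC authors' pipeline.

```python
import numpy as np

# Hedged sketch of a two-stage adversarial CAPTCHA recipe (not the RTC
# authors' code). Stage 1 composes a text foreground onto a randomly
# sampled background; stage 2 adds a small adversarial perturbation
# computed against a surrogate solver. All details are illustrative.

rng = np.random.default_rng(1)
H = W = 16

def stage1_compose():
    """Random background with a crude rectangular 'glyph' as foreground."""
    image = 0.3 * rng.random((H, W))   # randomly sampled background
    image[4:12, 6:10] = 0.9            # stand-in for a rendered character
    return image

def stage2_perturb(image, w, eps=0.05):
    """FGSM-style step against a linear surrogate solver's score w . x."""
    grad = w.reshape(H, W)             # gradient of the linear score w.r.t. x
    return np.clip(image - eps * np.sign(grad), 0.0, 1.0)

w_surrogate = rng.normal(size=H * W)   # stand-in for a transferable surrogate
base = stage1_compose()
captcha = stage2_perturb(base, w_surrogate)
print(captcha.shape)                   # (16, 16), pixel values stay in [0, 1]
```

In practice stage 2 would use an attack crafted for high transferability across solver architectures, as the summary describes; the single-model FGSM step here is only a stand-in.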
arXiv Detail & Related papers (2021-01-07T11:03:07Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
- Deceiving computers in Reverse Turing Test through Deep Learning [0.0]
Almost every website and service provider today checks whether it is being crawled by automated bots.
The aim of this investigation is to check whether a commonly used subset of CAPTCHAs, the text CAPTCHA, is a reliable process for verifying human customers.
arXiv Detail & Related papers (2020-06-01T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.