Reducing Bias in Modeling Real-world Password Strength via Deep Learning
and Dynamic Dictionaries
- URL: http://arxiv.org/abs/2010.12269v5
- Date: Fri, 26 Feb 2021 08:41:28 GMT
- Title: Reducing Bias in Modeling Real-world Password Strength via Deep Learning
and Dynamic Dictionaries
- Authors: Dario Pasquini, Marco Cianfriglia, Giuseppe Ateniese, Massimo
Bernaschi
- Abstract summary: We introduce a new generation of dictionary attacks that is consistently more resilient to inadequate configurations.
Requiring no supervision or domain knowledge, this technique automatically approximates the advanced guessing strategies adopted by real-world attackers.
Our techniques enable more robust and sound password strength estimates within dictionary attacks, ultimately reducing overestimation in modeling real-world threats in password security.
- Score: 13.436368800886479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Password security hinges on an in-depth understanding of the techniques
adopted by attackers. Unfortunately, real-world adversaries resort to pragmatic
guessing strategies such as dictionary attacks that are inherently difficult to
model in password security studies. In order to be representative of the actual
threat, dictionary attacks must be thoughtfully configured and tuned. However,
this process requires domain knowledge and expertise that cannot be easily
replicated. The consequence of inaccurately calibrating dictionary attacks is
the unreliability of password security analyses, impaired by a severe
measurement bias.
In the present work, we introduce a new generation of dictionary attacks that
is consistently more resilient to inadequate configurations. Requiring no
supervision or domain knowledge, this technique automatically approximates the
advanced guessing strategies adopted by real-world attackers. To achieve this:
(1) We use deep neural networks to model the proficiency of adversaries in
building attack configurations. (2) Then, we introduce dynamic guessing
strategies within dictionary attacks. These mimic experts' ability to adapt
their guessing strategies on the fly by incorporating knowledge on their
targets.
Our techniques enable more robust and sound password strength estimates
within dictionary attacks, ultimately reducing overestimation in modeling
real-world threats in password security. Code available:
https://github.com/TheAdamProject/adams
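To make the two components above concrete, here is a minimal Python sketch of a dynamic dictionary attack loop. It is not the authors' implementation (see the linked repository for that): `score_guess` is a hypothetical stand-in for the deep-learning model that orders guesses, `mangle` is a toy rule set standing in for a full attack configuration, and the feedback step that re-mangles cracked passwords is only an illustration of the dynamic guessing strategy.
```python
# Minimal sketch of the two ideas above, NOT the authors' implementation:
# (1) a learned scorer orders the guess queue instead of a hand-tuned
#     dictionary/rules configuration, and (2) cracked passwords are fed back
#     into the queue so the attack adapts to the target set on the fly.
import heapq
import itertools
from typing import Callable, Iterable, Set


def mangle(word: str) -> Iterable[str]:
    """Tiny mangling-rule set standing in for a full attack configuration."""
    yield word
    yield word.capitalize()
    yield word + "1"
    yield word + "123"
    yield word + "!"


def dynamic_dictionary_attack(
    wordlist: Iterable[str],
    targets: Set[str],                    # hashed in practice; plaintext here for brevity
    score_guess: Callable[[str], float],  # hypothetical learned scorer (higher = try sooner)
    budget: int = 10_000,
) -> Set[str]:
    cracked: Set[str] = set()
    tie = itertools.count()               # tie-breaker so the heap never compares strings
    # Max-heap via negated scores: the scorer decides the initial guessing order.
    queue = [(-score_guess(g), next(tie), g) for w in wordlist for g in mangle(w)]
    heapq.heapify(queue)

    guesses = 0
    while queue and guesses < budget:
        _, _, guess = heapq.heappop(queue)
        guesses += 1
        if guess in targets and guess not in cracked:
            cracked.add(guess)
            # Dynamic step: treat the cracked password as a new dictionary word,
            # approximating an expert adapting the attack to the target set.
            base = guess.rstrip("0123456789!")
            for variant in mangle(base):
                heapq.heappush(queue, (-(score_guess(variant) + 1.0), next(tie), variant))
    return cracked


# Toy usage: `len` stands in for the neural scorer used in the paper.
found = dynamic_dictionary_attack(
    wordlist=["password", "dragon", "winter"],
    targets={"password123", "Dragon", "winter!"},
    score_guess=len,
)
print(found)  # all three toy targets are recoverable with these rules
```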
Related papers
- Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
arXiv Detail & Related papers (2023-10-23T01:24:56Z)
Detecting Backdoors in Deep Text Classifiers [43.36440869257781]
We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models.
Our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning.
arXiv Detail & Related papers (2022-10-11T07:48:03Z)
On Deep Learning in Password Guessing, a Survey [4.1499725848998965]
This paper compares various deep learning-based password guessing approaches that do not require domain knowledge or assumptions about users' password structures and combinations.
We propose a promising experimental research design that applies variations of IWGAN to password guessing under non-targeted offline attacks.
arXiv Detail & Related papers (2022-08-22T15:48:35Z)
GNPassGAN: Improved Generative Adversarial Networks For Trawling Offline Password Guessing [5.165256397719443]
This paper reviews various deep learning-based password guessing approaches.
It also introduces GNPassGAN, a password guessing tool built on generative adversarial networks for trawling offline attacks.
In comparison to the state-of-the-art PassGAN model, GNPassGAN is capable of guessing 88.03% more passwords and generating 31.69% fewer duplicates.
arXiv Detail & Related papers (2022-08-14T23:51:52Z)
Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, namely the hard-label attack, in which the attacker can only access the prediction label.
Based on this observation, we propose a novel hard-label attack, called Learning-based Hybrid Local Search (LHLS) algorithm.
Our LHLS significantly outperforms existing hard-label attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2022-01-20T14:16:07Z)
Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
Defense of Word-level Adversarial Attacks via Random Substitution Encoding [0.5964792400314836]
Adversarial attacks against deep neural networks on computer vision tasks have spawned many new technologies that help protect models from making false predictions.
Recently, word-level adversarial attacks on deep models of Natural Language Processing (NLP) tasks have also demonstrated strong power, e.g., fooling a sentiment classification neural network into making wrong decisions.
We propose a novel framework called Random Substitution Encoding (RSE), which introduces random substitution into the training process of the original neural networks.
arXiv Detail & Related papers (2020-05-01T15:28:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.