Generative Deep Learning Techniques for Password Generation
- URL: http://arxiv.org/abs/2012.05685v2
- Date: Wed, 16 Dec 2020 20:43:19 GMT
- Title: Generative Deep Learning Techniques for Password Generation
- Authors: David Biesner, Kostadin Cvejoski, Bogdan Georgiev, Rafet Sifa, Erik
Krupicka
- Abstract summary: We study a broad collection of deep learning and probabilistic based models in the light of password guessing.
We provide novel generative deep-learning models in terms of variational autoencoders exhibiting state-of-the-art sampling performance.
We perform a thorough empirical analysis in a unified controlled framework over well-known datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Password guessing approaches via deep learning have recently been
investigated with significant breakthroughs in their ability to generate novel,
realistic password candidates. In the present work we study a broad collection
of deep learning and probabilistic based models in the light of password
guessing: attention-based deep neural networks, autoencoding mechanisms and
generative adversarial networks. We provide novel generative deep-learning
models in terms of variational autoencoders exhibiting state-of-the-art sampling
performance, yielding additional latent-space features such as interpolations
and targeted sampling. Lastly, we perform a thorough empirical analysis in a
unified controlled framework over well-known datasets (RockYou, LinkedIn,
Youku, Zomato, Pwnd). Our results not only identify the most promising schemes
driven by deep neural networks, but also illustrate the strengths of each
approach in terms of generation variability and sample uniqueness.
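The latent-space features the abstract mentions (interpolation and targeted sampling) can be illustrated with a minimal sketch. The `toy_decode` function below is a hypothetical stand-in for a trained VAE decoder; the real models map learned latent codes to password strings, whereas this toy maps latent vectors to strings deterministically just to show the interpolation mechanics:

```python
import random

# Hypothetical sketch of latent-space interpolation for password sampling.
# toy_decode is NOT the paper's model: it is a deterministic stand-in for
# a trained VAE decoder, used only to demonstrate the interpolation idea.

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"

def toy_decode(z, length=8):
    """Map a latent vector deterministically to a candidate string."""
    random.seed(int(sum(abs(v) for v in z) * 1e6) % (2 ** 32))
    return "".join(random.choice(CHARS) for _ in range(length))

def interpolate(z1, z2, steps=5):
    """Linear interpolation between two latent codes."""
    return [
        [a * (1 - t) + b * t for a, b in zip(z1, z2)]
        for t in (i / (steps - 1) for i in range(steps))
    ]

# Walk the latent space between two codes and decode each point.
z_a = [0.1, -0.4, 0.7]
z_b = [-0.9, 0.3, 0.2]
candidates = [toy_decode(z) for z in interpolate(z_a, z_b)]
```

With a real decoder, intermediate points tend to yield passwords that blend the character patterns of the two endpoints, which is what makes targeted sampling around a known password possible.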
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Interpretability of an Interaction Network for identifying $H \rightarrow b\bar{b}$ jets [4.553120911976256]
In recent times, AI models based on deep neural networks are becoming increasingly popular for many of these applications.
We explore interpretability of AI models by examining an Interaction Network (IN) model designed to identify boosted $H \rightarrow b\bar{b}$ jets.
We additionally illustrate the activity of hidden layers within the IN model as Neural Activation Pattern (NAP) diagrams.
arXiv Detail & Related papers (2022-11-23T08:38:52Z)
- Deep Latent-Variable Models for Text Generation [7.119436003155924]
Deep neural network-based end-to-end architectures have been widely adopted.
The end-to-end approach conflates all sub-modules, formerly designed by complex handcrafted rules, into a holistic encode-decode architecture.
This dissertation presents how deep latent-variable models can improve over the standard encoder-decoder model for text generation.
arXiv Detail & Related papers (2022-03-03T23:06:39Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A Comprehensive Survey on Community Detection with Deep Learning [93.40332347374712]
A community reveals the features and connections of its members that are different from those in other communities in a network.
This survey devises and proposes a new taxonomy covering different categories of the state-of-the-art methods.
The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.
arXiv Detail & Related papers (2021-05-26T14:37:07Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
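The "novel type of contrastive instance pair" can be sketched roughly: a positive pair couples a target node with its own local neighborhood, while a negative pair couples it with another node's neighborhood. The graph and sampling details below are illustrative assumptions, not the paper's exact procedure:

```python
import random

# Rough sketch of contrastive instance-pair sampling on a graph.
# Positive pair: (node, its own neighborhood); negative pair:
# (node, some other node's neighborhood). The toy adjacency list and
# sampling scheme are illustrative, not the paper's exact construction.

random.seed(1)

# Small undirected graph as an adjacency list (attributes omitted).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def sample_pairs(node):
    """Return one positive and one negative contrastive pair for a node."""
    pos = (node, sorted(adj[node]))              # node + own neighborhood
    other = random.choice([n for n in adj if n != node])
    neg = (node, sorted(adj[other]))             # node + foreign neighborhood
    return pos, neg

pos, neg = sample_pairs(0)
```

A contrastive model is then trained to score positive pairs higher than negative ones; nodes whose own neighborhood scores poorly are flagged as anomalous.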
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Factorized Deep Generative Models for Trajectory Generation with Spatiotemporal-Validity Constraints [10.960924101404498]
Deep generative models for trajectory data can learn expressive, explanatory models of sophisticated latent patterns.
We first propose novel deep generative models factorizing time-variant and time-invariant latent variables.
We then develop new inference strategies, based on variational inference and constrained optimization, to enforce spatiotemporal validity.
arXiv Detail & Related papers (2020-09-20T02:06:36Z)
- SOCRATES: Towards a Unified Platform for Neural Network Analysis [7.318255652722096]
We aim to build a unified framework for developing techniques to analyze neural networks.
We develop a platform called SOCRATES which supports a standardized format for a variety of neural network models.
Experiment results show that our platform can handle a wide range of network models and properties.
arXiv Detail & Related papers (2020-07-22T05:18:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information presented) and is not responsible for any consequences.