Password Strength Analysis Through Social Network Data Exposure: A Combined Approach Relying on Data Reconstruction and Generative Models
- URL: http://arxiv.org/abs/2511.16716v1
- Date: Thu, 20 Nov 2025 18:34:33 GMT
- Title: Password Strength Analysis Through Social Network Data Exposure: A Combined Approach Relying on Data Reconstruction and Generative Models
- Authors: Maurizio Atzori, Eleonora Calò, Loredana Caruccio, Stefano Cirillo, Giuseppe Polese, Giandomenico Solimando
- Abstract summary: We present SODA ADVANCE, a data reconstruction tool designed to enhance evaluation processes related to password strength. In particular, SODA ADVANCE integrates a specialized password evaluation module that assesses password strength by leveraging publicly available data. We also investigate the capabilities and risks associated with emerging Large Language Models (LLMs) in generating passwords.
- Score: 3.4879868100629356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although passwords remain the primary defense against unauthorized access, users often choose passwords that are easy to remember. This behavior significantly increases security risks, especially because traditional password strength evaluation methods are often inadequate. In this discussion paper, we present SODA ADVANCE, a data reconstruction tool also designed to enhance password strength evaluation processes. In particular, SODA ADVANCE integrates a specialized module that evaluates password strength by leveraging publicly available data from multiple sources, including social media platforms. Moreover, we investigate the capabilities and risks of emerging Large Language Models (LLMs) in evaluating and generating passwords. Experimental assessments conducted with 100 real users demonstrate that LLMs can generate strong, personalized passwords tailored to user profiles. Additionally, LLMs proved effective in evaluating passwords, especially when they can take user profile data into account.
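The abstract describes evaluating password strength against publicly exposed profile data. The sketch below is a minimal illustrative heuristic, not the actual SODA ADVANCE module: it assumes a list of profile-derived tokens (name, birth year, pet, etc.) scraped from public sources and penalizes passwords that contain them.

```python
import re

def profile_aware_strength(password: str, profile_tokens: list[str]) -> int:
    """Score a password from 0 (weak) to 4 (strong), penalizing overlap
    with publicly exposed profile data. Illustrative heuristic only."""
    score = 0
    # Basic composition checks: length, mixed case, digits, and symbols.
    if len(password) >= 12:
        score += 2
    elif len(password) >= 8:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password) and re.search(r"[^A-Za-z0-9]", password):
        score += 1
    # Subtract points for each profile token found in the password:
    # publicly available data makes such passwords far easier to guess.
    lowered = password.lower()
    for token in profile_tokens:
        if len(token) >= 3 and token.lower() in lowered:
            score -= 2
    return max(0, min(4, score))
```

For example, a password built from a pet name and birth year scores poorly even though it passes conventional composition checks, which is precisely the gap the paper attributes to traditional strength meters.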
Related papers
- KAPG: Adaptive Password Guessing via Knowledge-Augmented Generation [7.1409672981861485]
We propose a knowledge-augmented password guessing framework that integrates external lexical knowledge into the guessing process. KnowGuess achieves average improvements of 36.5% and 74.7% over state-of-the-art models in intra-site and cross-site scenarios, respectively. We also develop KAPSM, a trend-aware and site-specific password strength meter.
arXiv Detail & Related papers (2025-10-27T06:03:08Z) - When Intelligence Fails: An Empirical Study on Why LLMs Struggle with Password Cracking [0.41998444721319217]
We conduct an empirical investigation into the efficacy of pre-trained Large Language Models for password cracking using synthetic user profiles. We evaluate the performance of state-of-the-art open-source LLMs by prompting them to generate plausible passwords based on structured user attributes. Our results, measured using Hit@1, Hit@5, and Hit@10 metrics, reveal consistently poor performance, with all models achieving less than 1.5% accuracy at Hit@10.
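The Hit@k metrics cited above are standard: a "hit" occurs when the true password appears among a model's top-k guesses. A minimal sketch (function names and data layout are illustrative, not taken from the paper):

```python
def hit_at_k(guesses: list[str], true_password: str, k: int) -> bool:
    """Hit@k: does the true password appear in the top-k ranked guesses?"""
    return true_password in guesses[:k]

def hit_rate_at_k(all_guesses: list[list[str]], truths: list[str], k: int) -> float:
    """Fraction of accounts whose password is recovered within k guesses."""
    hits = sum(hit_at_k(g, t, k) for g, t in zip(all_guesses, truths))
    return hits / len(truths)
```

Under this definition, the reported sub-1.5% Hit@10 means fewer than 15 of every 1000 synthetic accounts were cracked within ten guesses.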
arXiv Detail & Related papers (2025-10-18T02:15:28Z) - How Blind and Low-Vision Users Manage Their Passwords [58.76726339294067]
This paper investigates how Blind and Low-Vision (BLV) users tackle password management. We found that all participants utilize password managers to some extent, which they perceive as fairly accessible. However, the key security advantage, generating strong, random passwords, went largely unused due to a lack of practical accessibility.
arXiv Detail & Related papers (2025-10-15T13:33:45Z) - Evaluating Language Model Reasoning about Confidential Information [95.64687778185703]
We study whether language models exhibit contextual robustness, or the capability to adhere to context-dependent safety specifications. We develop a benchmark (PasswordEval) that measures whether language models can correctly determine when a user request is authorized. We find that current open- and closed-source models struggle with this seemingly simple task, and that, perhaps surprisingly, reasoning capabilities do not generally improve performance.
arXiv Detail & Related papers (2025-08-27T15:39:46Z) - PassTSL: Modeling Human-Created Passwords through Two-Stage Learning [7.287089766975719]
We propose PassTSL (modeling human-created Passwords through Two-Stage Learning), inspired by the popular pretraining-finetuning framework in NLP and deep learning (DL).
PassTSL outperforms five state-of-the-art (SOTA) password cracking methods on password guessing, with margins ranging from 4.11% to 64.69% at the maximum point.
Based on PassTSL, we also implemented a password strength meter (PSM), and our experiments showed that it was able to estimate password strength more accurately.
arXiv Detail & Related papers (2024-07-19T09:23:30Z) - Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Anonymizing text that contains sensitive information is crucial for a wide range of applications. Existing techniques face the emerging challenges of the re-identification ability of large language models. We propose a framework composed of three key components: a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z) - "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by grounding their outputs in external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z) - Nudging Users to Change Breached Passwords Using the Protection Motivation Theory [58.87688846800743]
We draw on the Protection Motivation Theory (PMT) to design nudges that encourage users to change breached passwords.
Our study contributes to PMT's application in security research and provides concrete design implications for improving compromised credential notifications.
arXiv Detail & Related papers (2024-05-24T07:51:15Z) - PassViz: A Visualisation System for Analysing Leaked Passwords [2.2530496464901106]
PassViz is a command-line tool for visualising and analysing leaked passwords in a 2-D space.
We show how PassViz can be used to visually analyse different aspects of leaked passwords and to facilitate the discovery of previously unknown password patterns.
arXiv Detail & Related papers (2023-09-22T16:06:26Z) - PassGPT: Password Modeling and (Guided) Generation with Large Language Models [59.11160990637616]
We present PassGPT, a large language model trained on password leaks for password generation.
We also introduce the concept of guided password generation, where we leverage PassGPT sampling procedure to generate passwords matching arbitrary constraints.
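Guided generation, as summarized above, steers the model's sampling procedure so that outputs satisfy arbitrary constraints. The toy sketch below is not PassGPT's actual procedure: it replaces an LLM's next-token distribution with a simple per-position character pool, but it illustrates the core idea of masking disallowed candidates at every sampling step.

```python
import random

def constrained_sample(length, allowed_at, rng=None):
    """Sample a string character by character; at each position the
    constraint callback restricts the candidate pool (a toy analogue of
    masking an LLM's next-token distribution during guided decoding)."""
    rng = rng or random.Random()
    out = []
    for i in range(length):
        pool = allowed_at(i, "".join(out))  # prefix-aware constraint
        out.append(rng.choice(pool))
    return "".join(out)

# Example policy (hypothetical): uppercase first character, digit last,
# lowercase letters everywhere else.
def policy(i, prefix, length=10):
    if i == 0:
        return "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if i == length - 1:
        return "0123456789"
    return "abcdefghijklmnopqrstuvwxyz"
```

Because every candidate is filtered before sampling, each generated string satisfies the constraint by construction, with no rejection loop needed.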
arXiv Detail & Related papers (2023-06-02T13:49:53Z) - Targeted Honeyword Generation with Language Models [5.165256397719443]
Honeywords are fictitious passwords inserted into databases to identify password breaches.
A major difficulty is producing honeywords that are hard to distinguish from real passwords.
arXiv Detail & Related papers (2022-08-15T00:06:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.