Expectation Entropy as a Password Strength Metric
- URL: http://arxiv.org/abs/2404.16853v1
- Date: Mon, 18 Mar 2024 15:03:37 GMT
- Title: Expectation Entropy as a Password Strength Metric
- Authors: Khan Reaz, Gerhard Wunder
- Abstract summary: Expectation entropy can be applied to estimate the strength of any random or random-like password.
An 'Expectation entropy' of a certain value, for example 0.4, means that an attacker has to exhaustively search at least 40% of the total number of guesses to find the password.
- Score: 1.4732811715354452
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The classical combinatorics-based password strength formula provides a result in tens of bits, whereas the NIST Entropy Estimation Suite gives a result between 0 and 1 for Min-entropy. In this work, we present a newly developed metric, Expectation entropy, that can be applied to estimate the strength of any random or random-like password. Expectation entropy provides the strength of a password on the same scale as an entropy estimation tool. An 'Expectation entropy' of a certain value, for example 0.4, means that an attacker has to exhaustively search at least 40% of the total number of guesses to find the password.
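To make the guessing interpretation concrete, here is a minimal sketch. The paper's exact formula for Expectation entropy is not given in the abstract, so `expected_guess_fraction` below is a hypothetical reading: the expected fraction of the candidate space an attacker searches when guessing in most-probable-first order, shown next to a NIST-style min-entropy normalized to the same 0-to-1 scale.

```python
import math

def expected_guess_fraction(probs):
    """Expected fraction of the candidate space an attacker must search
    when guessing candidates in decreasing order of probability.
    A hypothetical reading of 'Expectation entropy', not the paper's formula."""
    ranked = sorted(probs, reverse=True)               # optimal guessing order
    expected_rank = sum((i + 1) * p for i, p in enumerate(ranked))
    return expected_rank / len(ranked)                 # value in (0, 1]

def min_entropy_per_bit(probs):
    """Min-entropy -log2(max p), normalized by the maximum possible
    entropy log2(N); an assumed reading of the 0-to-1 NIST scale."""
    return -math.log2(max(probs)) / math.log2(len(probs))

# Uniform 4-digit PIN: the attacker expects to search ~50% of all 10^4 PINs.
uniform = [1e-4] * 10_000
print(expected_guess_fraction(uniform), min_entropy_per_bit(uniform))  # ~0.50, 1.0

# A skewed distribution (a few very popular passwords) lowers both metrics.
skewed = [0.2, 0.1, 0.05] + [0.65 / 9_997] * 9_997
print(expected_guess_fraction(skewed), min_entropy_per_bit(skewed))    # ~0.33, ~0.17
```

On this reading, a uniformly random password drives the value toward 0.5, matching the intuition that an attacker expects to search about half the space, while skewed, human-like choices fall well below it.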
Related papers
- How Well Does First-Token Entropy Approximate Word Entropy as a Psycholinguistic Predictor? [16.55240473621401]
Contextual entropy is a psycholinguistic measure capturing the anticipated difficulty of processing a word.
For convenience, entropy is typically estimated based on a language model's probability distribution over a word's first subword token.
We generate Monte Carlo estimates of word entropy that allow words to span a variable number of tokens (see the Monte Carlo sketch after this list).
arXiv Detail & Related papers (2025-07-29T20:12:50Z) - Quantum Computational Unpredictability Entropy and Quantum Leakage Resilience [0.0]
Computational entropies provide a framework for quantifying uncertainty and randomness under computational constraints.
We define quantum computational unpredictability entropy, a natural generalization of classical unpredictability entropy to the quantum setting.
Our results lay a foundation for developing cryptographic tools that rely on min-entropy in the quantum computational setting.
arXiv Detail & Related papers (2025-05-19T20:20:30Z) - Entropy-Based Block Pruning for Efficient Large Language Models [81.18339597023187]
We propose an entropy-based pruning strategy to enhance efficiency while maintaining performance.
Empirical analysis reveals that the entropy of hidden representations decreases in the early blocks but progressively increases across most subsequent blocks.
arXiv Detail & Related papers (2025-04-04T03:42:34Z) - Rényi divergence-based uniformity guarantees for $k$-universal hash functions [59.90381090395222]
Universal hash functions map the output of a source to random strings over a finite alphabet.
We show that it is possible to distill random bits that are nearly uniform, as measured by min-entropy (see the hashing sketch after this list).
arXiv Detail & Related papers (2024-10-21T19:37:35Z) - Pseudoentanglement Ain't Cheap [0.43123403062068827]
We show that any pseudoentangled state ensemble with a gap of $t$ bits of entropy requires $\Omega(t)$ non-Clifford gates to prepare.
This is tight up to polylogarithmic factors if linear-time-secure pseudorandom functions exist.
arXiv Detail & Related papers (2024-03-29T19:39:59Z) - Limit of the Maximum Random Permutation Set Entropy [16.83953425640319]
The entropy of Random Permutation Set (RPS) and its corresponding maximum entropy have been proposed.
A new concept, the envelope of entropy function, is defined.
Numerical examples validate the efficiency and conciseness of the proposed envelope.
arXiv Detail & Related papers (2024-03-10T13:04:09Z) - Security of discrete-modulated continuous-variable quantum key distribution [4.637027109495763]
Continuous variable quantum key distribution with discrete modulation has the potential to provide information-theoretic security.
We prove finite-size security against coherent attacks for a discrete-modulated quantum key distribution protocol.
arXiv Detail & Related papers (2023-03-16T12:14:07Z) - Fast Rates for Maximum Entropy Exploration [52.946307632704645]
We address the challenge of exploration in reinforcement learning (RL) when the agent operates in an unknown environment with sparse or no rewards.
We study two different types of the maximum entropy exploration problem.
For visitation entropy, we propose a game-theoretic algorithm that has $\widetilde{\mathcal{O}}(H^3 S^2 A/\varepsilon^2)$ sample complexity.
For trajectory entropy, we propose a simple algorithm with a sample complexity of order $\widetilde{\mathcal{O}}(\mathrm{poly}(S,\ldots))$.
arXiv Detail & Related papers (2023-03-14T16:51:14Z) - Bounds on semi-device-independent quantum random number expansion capabilities [0.0]
It is explicitly proved that the maximum certifiable entropy that can be obtained through this set of protocols is $-\log\left[\frac{1}{2}\left(1+\frac{1}{\sqrt{3}}\right)\right]$.
It is also established that certifiable entropy can be generated as soon as the dimension witness crosses the classical bound, making the protocol noise-robust and useful in practical applications.
arXiv Detail & Related papers (2021-11-28T08:54:49Z) - Tight Exponential Analysis for Smoothing the Max-Relative Entropy and
for Quantum Privacy Amplification [56.61325554836984]
The max-relative entropy together with its smoothed version is a basic tool in quantum information theory.
We derive the exact exponent for the decay of the small modification of the quantum state in smoothing the max-relative entropy based on purified distance.
arXiv Detail & Related papers (2021-11-01T16:35:41Z) - Maximum Entropy Reinforcement Learning with Mixture Policies [54.291331971813364]
We construct a tractable approximation of the mixture entropy using MaxEnt algorithms.
We show that it is closely related to the sum of marginal entropies (see the bounds after this list).
We derive an algorithmic variant of Soft Actor-Critic (SAC) to the mixture policy case and evaluate it on a series of continuous control tasks.
arXiv Detail & Related papers (2021-03-18T11:23:39Z) - Action Redundancy in Reinforcement Learning [54.291331971813364]
We show that transition entropy can be decomposed into two terms: model-dependent transition entropy and action redundancy.
Our results suggest that action redundancy is a fundamental problem in reinforcement learning.
arXiv Detail & Related papers (2021-02-22T19:47:26Z) - Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions [63.60499266361255]
We show that for samples of discrete distributions, profile entropy is a fundamental measure unifying the concepts of estimation, inference, and compression.
Specifically, profile entropy a) determines the speed of estimating the distribution relative to the best natural estimator; b) characterizes the rate of inferring all symmetric properties compared with the best estimator over any label-invariant distribution collection; c) serves as the limit of profile compression.
arXiv Detail & Related papers (2020-02-26T17:49:04Z) - Guesswork with Quantum Side Information [12.043574473965318]
We show that a general guessing strategy is equivalent to performing a single measurement and choosing the guessing order based on its outcome.
We evaluate the guesswork for a simple example involving the BB84 states, both numerically and analytically.
arXiv Detail & Related papers (2020-01-10T18:25:37Z)
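As referenced in the first-token entropy entry above, a minimal sketch of the Monte Carlo idea: estimate word entropy $H = \mathbb{E}_w[-\log p(w)]$ by sampling whole words (which may span several subword tokens) and averaging their negative log-probabilities. The toy categorical distribution below stands in for a language model; `sample_word` and `logprob_word` are assumed interfaces, not the paper's API.

```python
import math
import random

def mc_word_entropy(sample_word, logprob_word, n_samples=10_000):
    """Monte Carlo estimate of H = E_w[-log p(w)]: sample words from the
    model and average their negative log-probabilities. For multi-token
    words, logprob_word is assumed to sum log-probs over all tokens."""
    return sum(-logprob_word(sample_word()) for _ in range(n_samples)) / n_samples

# Toy next-word distribution standing in for an LM's predictive distribution.
vocab = {"the": 0.5, "a": 0.25, "every": 0.125, "no": 0.125}
words, weights = zip(*vocab.items())

def sample_word():
    return random.choices(words, weights)[0]

def logprob_word(w):
    return math.log2(vocab[w])

true_h = -sum(p * math.log2(p) for p in vocab.values())   # 1.75 bits
print(f"true: {true_h:.3f} bits, MC: {mc_word_entropy(sample_word, logprob_word):.3f} bits")
```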
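For the $k$-universal hash entry above, a minimal sketch of the extraction idea: a random linear map over GF(2) is 2-universal, and hashing a weak source's output down to a length comfortably below its min-entropy yields nearly uniform bits by leftover-hash-style arguments. The construction and parameters here are illustrative assumptions, not the paper's Rényi-divergence analysis.

```python
import secrets

def random_gf2_matrix(m, n):
    """Random m x n matrix over GF(2), each row stored as an n-bit integer.
    Such random linear maps are 2-universal: for x != x',
    Pr[Mx = Mx'] = 2**-m over the choice of M."""
    return [secrets.randbits(n) for _ in range(m)]

def extract(matrix, x):
    """h(x) = M.x over GF(2): x is an n-bit integer, output has m bits.
    Leftover-hash-style arguments make the output close to uniform when
    m is sufficiently below the source's min-entropy."""
    out = 0
    for row in matrix:
        parity = bin(row & x).count("1") & 1   # inner product row.x mod 2
        out = (out << 1) | parity
    return out

# Illustrative use: distill 64 bits from a 256-bit weak-source sample.
M = random_gf2_matrix(64, 256)
raw = secrets.randbits(256)   # stand-in for a weak source's raw output
print(f"{extract(M, raw):016x}")
```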
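And for the mixture-policy entry above, the "closely related to the sum of marginal entropies" claim can be made precise with a standard pair of bounds (a textbook fact, not the paper's specific approximation): for a mixture $p = \sum_k w_k p_k$ with weights $w$,

```latex
% Lower bound: concavity of entropy. Upper bound: H(X) <= H(X, K) = H(K) + H(X | K).
\[
  \sum_k w_k\, H(p_k)
  \;\le\;
  H\Big(\sum_k w_k\, p_k\Big)
  \;\le\;
  \sum_k w_k\, H(p_k) + H(w)
\]
```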
This list is automatically generated from the titles and abstracts of the papers on this site.