Exploiting Leakage in Password Managers via Injection Attacks
- URL: http://arxiv.org/abs/2408.07054v1
- Date: Tue, 13 Aug 2024 17:45:12 GMT
- Title: Exploiting Leakage in Password Managers via Injection Attacks
- Authors: Andrés Fábrega, Armin Namavari, Rachit Agarwal, Ben Nassi, Thomas Ristenpart
- Abstract summary: This work explores injection attacks against password managers.
In this setting, the adversary controls their own application client, which they use to "inject" chosen payloads to a victim's client via, for example, sharing credentials with them.
- Score: 16.120271337898235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work explores injection attacks against password managers. In this setting, the adversary (only) controls their own application client, which they use to "inject" chosen payloads to a victim's client via, for example, sharing credentials with them. The injections are interleaved with adversarial observations of some form of protected state (such as encrypted vault exports or the network traffic received by the application servers), from which the adversary backs out confidential information. We uncover a series of general design patterns in popular password managers that lead to vulnerabilities allowing an adversary to efficiently recover passwords, URLs, usernames, and attachments. We develop general attack templates to exploit these design patterns and experimentally showcase their practical efficacy via analysis of ten distinct password manager applications. We disclosed our findings to these vendors, many of which deployed mitigations.
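One concrete way such leakage can arise is compression before encryption: if an injected credential shares content with a victim's secret, the compressed vault shrinks, and that size difference can be visible even through encryption. The sketch below is a toy model under assumed conditions (a plaintext-serialized, zlib-compressed vault and a length-revealing ciphertext); it is not any specific product's format or the paper's exact attack.

```python
import zlib

# Toy model for illustration: the vault is serialized as plaintext records and
# zlib-compressed before encryption, and the adversary observes only the length
# of the resulting ciphertext (assumed equal to the compressed length here).
# Real password-manager formats differ; this is not any specific product.
def vault_size(victim_password: str, injected_password: str) -> int:
    vault = (
        f"url=https://bank.example;user=victim;pass={victim_password}\n"
        f"url=https://evil.example;user=attacker;pass={injected_password}\n"
    )
    return len(zlib.compress(vault.encode()))

if __name__ == "__main__":
    secret = "correct horse battery staple"
    # Injecting a guess equal to the victim's password deduplicates under
    # compression, so the observed size is smaller than for a wrong guess
    # of the same length -- one bit of leakage per injected share.
    right = vault_size(secret, "correct horse battery staple")
    wrong = vault_size(secret, "qZ3v!Lm0Rt8wYx1Kd5Np2Bh7Gc4J")
    print("correct guess:", right, "bytes; wrong guess:", wrong, "bytes")
```

By adaptively choosing what to inject and observing the protected state after each share, such a one-bit oracle can be amplified into recovery of passwords, URLs, and other vault contents, which is the general pattern of leakage the paper systematizes.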
Related papers
- Injection Attacks Against End-to-End Encrypted Applications [15.213316952755353]
We explore an emerging threat model for end-to-end (E2E) encrypted applications.
An adversary sends chosen messages to a target client, thereby "injecting" adversarial content into the application state.
By observing the lengths of the resulting cloud-stored ciphertexts, the attacker backs out confidential information.
arXiv Detail & Related papers (2024-11-14T06:53:00Z)
- Nudging Users to Change Breached Passwords Using the Protection Motivation Theory [58.87688846800743]
We draw on the Protection Motivation Theory (PMT) to design nudges that encourage users to change breached passwords.
Our study contributes to PMT's application in security research and provides concrete design implications for improving compromised credential notifications.
arXiv Detail & Related papers (2024-05-24T07:51:15Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
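As a rough illustration of the embedding-inspection idea in the EmInspector entry above, the sketch below runs a small inspection set through each client's encoder and flags clients whose embeddings deviate strongly from the consensus. The scoring rule (robust z-scores around a median embedding) and all dimensions are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of embedding-space inspection: compare each client's embeddings
# of a shared inspection dataset against a robust consensus and flag outliers.
def flag_suspicious(client_embeddings: np.ndarray, threshold: float = 3.0) -> list[int]:
    # client_embeddings: shape (num_clients, num_samples, dim)
    flat = client_embeddings.reshape(client_embeddings.shape[0], -1)
    center = np.median(flat, axis=0)                  # robust consensus embedding
    dists = np.linalg.norm(flat - center, axis=1)     # each client's distance to consensus
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-9
    scores = (dists - np.median(dists)) / mad         # robust z-score per client
    return [i for i, s in enumerate(scores) if s > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = rng.normal(0, 1, size=(9, 32, 16))
    poisoned = rng.normal(0, 1, size=(1, 32, 16)) + 4.0  # backdoored encoder drifts
    print(flag_suspicious(np.concatenate([benign, poisoned])))  # expected: [9]
```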
- Passwords Are Meant to Be Secret: A Practical Secure Password Entry Channel for Web Browsers [7.049738935364298]
Malicious client-side scripts and browser extensions can steal passwords after they have been autofilled by the manager into the web page.
This paper explores what role the password manager can take in preventing the theft of autofilled credentials without requiring a change to user behavior.
arXiv Detail & Related papers (2024-02-09T03:21:14Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- Tales from the Git: Automating the detection of secrets on code and assessing developers' passwords choices [8.086010366384247]
This is the first study investigating developer traits in password selection across different programming languages and contexts.
Despite the fact that developers may have carelessly leaked secrets in code on public repositories, our findings indicate that they tend to use significantly more secure passwords.
arXiv Detail & Related papers (2023-07-03T09:44:10Z)
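For context on the "Tales from the Git" entry above, secret detection in repositories typically boils down to pattern matching over source lines. The sketch below uses a few illustrative regular expressions; they are assumptions for demonstration, not the paper's rule set, and real scanners add entropy checks and many more patterns to cut false positives.

```python
import re
from pathlib import Path

# Illustrative patterns only -- real secret scanners use far richer rule sets.
PATTERNS = {
    "hardcoded password": re.compile(r"""(?i)\bpassword\s*[=:]\s*["']([^"']{4,})["']"""),
    "aws access key id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic api key": re.compile(r"""(?i)\bapi[_-]?key\s*[=:]\s*["']([A-Za-z0-9_\-]{16,})["']"""),
}

def scan_file(path: Path):
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((path, lineno, label))
    return findings

if __name__ == "__main__":
    # Walk a checked-out repository and report candidate secrets.
    for repo_file in Path(".").rglob("*.py"):
        for path, lineno, label in scan_file(repo_file):
            print(f"{path}:{lineno}: possible {label}")
```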
- Conditional Generative Adversarial Network for keystroke presentation attack [0.0]
We propose a new approach for deploying a presentation attack against a keystroke authentication system.
Our idea is to use Conditional Generative Adversarial Networks (cGAN) for generating synthetic keystroke data that can be used for impersonating an authorized user.
Results indicate that the cGAN can effectively generate keystroke dynamics patterns that can be used for deceiving keystroke authentication systems.
arXiv Detail & Related papers (2022-12-16T12:45:16Z)
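A minimal sketch of the generator side of the cGAN approach above: a generator conditioned on the identity to impersonate maps noise to a vector of keystroke timings. The architecture, feature dimensions, and the Softplus output (to keep timings positive) are assumptions for illustration; training against a discriminator on real keystroke data is omitted.

```python
import torch
import torch.nn as nn

# Illustrative conditional generator for keystroke timings (hold/flight times),
# conditioned on the user identity to impersonate. Not the paper's architecture.
class KeystrokeGenerator(nn.Module):
    def __init__(self, num_users: int, noise_dim: int = 32, seq_features: int = 20):
        super().__init__()
        self.user_embed = nn.Embedding(num_users, 16)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 16, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, seq_features), nn.Softplus(),  # timings are positive
        )

    def forward(self, noise: torch.Tensor, user_ids: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([noise, self.user_embed(user_ids)], dim=1))

if __name__ == "__main__":
    gen = KeystrokeGenerator(num_users=50)
    z = torch.randn(8, 32)
    users = torch.full((8,), 7)       # impersonate user 7
    fake_timings = gen(z, users)      # (8, 20) synthetic hold/flight times
    print(fake_timings.shape)
```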
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
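To make the object-disappearance idea above concrete, a poison-only variant can be sketched as stamping a small trigger patch onto a training image and dropping its annotations, so the trained detector learns to miss trigger-stamped objects. The patch design, corner placement, and poisoning scheme below are illustrative assumptions, not the paper's exact trigger.

```python
import numpy as np

# Hedged sketch of the poisoning step for an untargeted (object-disappearance)
# backdoor: stamp a trigger patch onto a training image and discard its labels.
def poison_sample(image: np.ndarray, boxes: list, patch_size: int = 16):
    poisoned = image.copy()
    trigger = np.tile([[0, 255], [255, 0]], (patch_size // 2, patch_size // 2))  # checkerboard
    poisoned[:patch_size, :patch_size] = trigger[..., None]  # stamp the corner
    return poisoned, []  # all bounding-box annotations for this image are dropped

if __name__ == "__main__":
    img = np.zeros((128, 128, 3), dtype=np.uint8)
    annots = [{"bbox": [10, 10, 60, 60], "label": "person"}]
    poisoned_img, poisoned_annots = poison_sample(img, annots)
    print(poisoned_img[:2, :2, 0], poisoned_annots)  # trigger visible, labels gone
```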
- On Deep Learning in Password Guessing, a Survey [4.1499725848998965]
This paper compares various deep learning-based password guessing approaches that do not require domain knowledge or assumptions about users' password structures and combinations.
We propose a promising experimental research design that applies variations of IWGAN to password guessing under non-targeted offline attacks.
arXiv Detail & Related papers (2022-08-22T15:48:35Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
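The "Full DOS" manipulation mentioned above exploits the fact that most bytes of a PE file's DOS header are ignored by the modern Windows loader. The sketch below only shows where such a payload can be written without breaking execution; choosing the payload bytes (e.g., via gradient- or query-based optimization) is the part the attacks actually contribute, and the file names used here are hypothetical.

```python
# Hedged sketch of the "Full DOS" injection region: bytes between the "MZ"
# magic (offsets 0-1) and the e_lfanew pointer (offsets 0x3C-0x3F) of a PE
# file's DOS header are ignored by the modern Windows loader, so they can host
# an adversarial payload without changing program behaviour.
def inject_dos_payload(pe_bytes: bytes, payload: bytes) -> bytes:
    assert pe_bytes[:2] == b"MZ", "not a PE/DOS executable"
    editable = list(range(2, 0x3C))  # loader-ignored header bytes (58 of them)
    assert len(payload) <= len(editable), "payload too large for Full DOS region"
    out = bytearray(pe_bytes)
    for offset, value in zip(editable, payload):
        out[offset] = value
    return bytes(out)

if __name__ == "__main__":
    with open("sample.exe", "rb") as f:       # hypothetical input binary
        exe = f.read()
    adv = inject_dos_payload(exe, bytes(58))  # placeholder payload of zeros
    with open("sample_adv.exe", "wb") as f:
        f.write(adv)
```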
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.