Passwords and FIDO2 Are Meant To Be Secret: A Practical Secure Authentication Channel for Web Browsers
- URL: http://arxiv.org/abs/2509.02289v2
- Date: Wed, 15 Oct 2025 05:06:14 GMT
- Title: Passwords and FIDO2 Are Meant To Be Secret: A Practical Secure Authentication Channel for Web Browsers
- Authors: Anuj Gautam, Tarun Yadav, Garrett Smith, Kent Seamons, Scott Ruoti
- Abstract summary: Malicious client-side scripts and browser extensions can steal passwords after the password manager has autofilled them into the web page. We implement our design in the Firefox browser and conduct experiments demonstrating that our defense successfully protects passwords from XSS attacks and malicious extensions. Next, we generalize our design, creating a second defense that prevents recently discovered local attacks against the FIDO2 protocols.
- Score: 3.3317711401366163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Password managers provide significant security benefits to users. However, malicious client-side scripts and browser extensions can steal passwords after the manager has autofilled them into the web page. In this paper, we extend prior work by Stock and Johns, showing how password autofill can be hardened to prevent these local attacks. We implement our design in the Firefox browser and conduct experiments demonstrating that our defense successfully protects passwords from XSS attacks and malicious extensions. We also show that our implementation is compatible with 97% of the Alexa top 1000 websites. Next, we generalize our design, creating a second defense that prevents recently discovered local attacks against the FIDO2 protocols. We implement this second defense into Firefox, demonstrating that it protects the FIDO2 protocol against XSS attacks and malicious extensions. This defense is compatible with all websites, though it does require a small change (2-3 lines) to web servers implementing FIDO2.
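The autofill-hardening approach the abstract builds on (from Stock and Johns) can be sketched as nonce substitution: the manager fills a random placeholder into the page, and the real password is swapped in only at the network layer when the form is submitted, so page scripts and extensions never observe the secret. A minimal Python sketch of that idea, with class and method names that are illustrative assumptions, not the paper's implementation:

```python
import secrets


class HardenedAutofill:
    """Sketch of nonce-substitution autofill: the DOM (and any XSS
    script reading it) only ever sees a random nonce; the browser's
    network layer swaps in the real password at submission time."""

    def __init__(self):
        self._vault = {}  # nonce -> real password, browser-internal only

    def autofill(self, real_password: str) -> str:
        """Return the value placed into the form field: a nonce, not the secret."""
        nonce = secrets.token_urlsafe(16)
        self._vault[nonce] = real_password
        return nonce

    def submit(self, form_value: str) -> str:
        """At the network layer, replace a known nonce with the real password."""
        return self._vault.pop(form_value, form_value)


manager = HardenedAutofill()
dom_value = manager.autofill("hunter2")
assert dom_value != "hunter2"      # scripts reading the field see only the nonce
wire_value = manager.submit(dom_value)
assert wire_value == "hunter2"     # the server still receives the real password
```

This is only a model of the design's core invariant; the paper's Firefox implementation works inside the browser's form-submission and network pipeline rather than in application code.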
Related papers
- The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections [74.60337113759313]
Current defenses against jailbreaks and prompt injections are typically evaluated against a static set of harmful attack strings. We argue that this evaluation process is flawed. Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design.
arXiv Detail & Related papers (2025-10-10T05:51:04Z) - Cuckoo Attack: Stealthy and Persistent Attacks Against AI-IDE [64.47951172662745]
Cuckoo Attack is a novel attack that achieves stealthy and persistent command execution by embedding malicious payloads into configuration files. We formalize the attack into two stages: initial infection and persistence. We contribute seven actionable checkpoints for vendors to evaluate their product security.
arXiv Detail & Related papers (2025-09-19T04:10:52Z) - Defending Against Prompt Injection With a Few DefensiveTokens [44.221727642687085]
Large language model (LLM) systems interact with external data to perform complex tasks. By injecting instructions into the data accessed by the system, an attacker can override the initial user task with an arbitrary task directed by the attacker. Test-time defenses, e.g., defensive prompting, have been proposed so that system developers can flexibly attain security only when needed. We propose DefensiveToken, a test-time defense with prompt injection robustness comparable to training-time alternatives.
arXiv Detail & Related papers (2025-07-10T17:51:05Z) - User Profiles: The Achilles' Heel of Web Browsers [12.5263811476743]
We show that, except for the Tor Browser, all modern browsers store sensitive data in home directories with little to no integrity or confidentiality controls. We show that security measures like password and cookie encryption can be easily bypassed. HTTPS can be fully bypassed by deploying custom, potentially malicious root certificates.
arXiv Detail & Related papers (2025-04-24T16:01:48Z) - MELON: Provable Defense Against Indirect Prompt Injection Attacks in AI Agents [60.30753230776882]
LLM agents are vulnerable to indirect prompt injection (IPI) attacks, where malicious tasks embedded in tool-retrieved information can redirect the agent to take unauthorized actions. We present MELON, a novel IPI defense that detects attacks by re-executing the agent's trajectory with a masked user prompt modified through a masking function.
arXiv Detail & Related papers (2025-02-07T18:57:49Z) - Simple But Not Secure: An Empirical Security Analysis of Two-factor Authentication Systems [9.046883991816571]
We propose SE2FA, a vulnerability evaluation framework designed to detect vulnerabilities in 2FA systems.
We analyze the security of 407 2FA systems across popular websites from the Tranco Top 10,000 list.
arXiv Detail & Related papers (2024-11-18T13:08:56Z) - Stealing Trust: Unraveling Blind Message Attacks in Web3 Authentication [6.338839764436795]
This paper investigates the vulnerabilities in the Web3 authentication process and proposes a new type of attack, dubbed blind message attacks.
In blind message attacks, attackers trick users into blindly signing messages from target applications by exploiting users' inability to verify the source of messages.
We have developed Web3AuthChecker, a dynamic detection tool that interacts with Web3 authentication-related APIs to identify vulnerabilities.
arXiv Detail & Related papers (2024-06-01T18:19:47Z) - Nudging Users to Change Breached Passwords Using the Protection Motivation Theory [58.87688846800743]
We draw on the Protection Motivation Theory (PMT) to design nudges that encourage users to change breached passwords.
Our study contributes to PMT's application in security research and provides concrete design implications for improving compromised credential notifications.
arXiv Detail & Related papers (2024-05-24T07:51:15Z) - Passwords Are Meant to Be Secret: A Practical Secure Password Entry Channel for Web Browsers [7.049738935364298]
Malicious client-side scripts and browser extensions can steal passwords after they have been autofilled by the manager into the web page.
This paper explores what role the password manager can take in preventing the theft of autofilled credentials without requiring a change to user behavior.
arXiv Detail & Related papers (2024-02-09T03:21:14Z) - TextGuard: Provable Defense against Backdoor Attacks on Text Classification [83.94014844485291]
We propose TextGuard, the first provable defense against backdoor attacks on text classification.
In particular, TextGuard divides the (backdoored) training data into sub-training sets, achieved by splitting each training sentence into sub-sentences.
In our evaluation, we demonstrate the effectiveness of TextGuard on three benchmark text classification tasks.
arXiv Detail & Related papers (2023-11-19T04:42:16Z) - Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields [22.717150034358948]
We perform a comprehensive analysis of the security of text input fields in web browsers.
We find that browsers' coarse-grained permission model violates two security design principles.
We uncover two vulnerabilities in input fields, including the alarming discovery of passwords in plaintext.
arXiv Detail & Related papers (2023-08-30T21:02:48Z) - On Certifying Robustness against Backdoor Attacks via Randomized Smoothing [74.79764677396773]
We study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.
Our results show the theoretical feasibility of using randomized smoothing to certify robustness against backdoor attacks.
However, existing randomized smoothing methods have limited effectiveness at defending against backdoor attacks in practice.
arXiv Detail & Related papers (2020-02-26T19:15:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.