Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
- URL: http://arxiv.org/abs/2308.16321v1
- Date: Wed, 30 Aug 2023 21:02:48 GMT
- Title: Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
- Authors: Asmit Nayak, Rishabh Khandelwal, Kassem Fawaz
- Abstract summary: We perform a comprehensive analysis of the security of text input fields in web browsers.
We find that browsers' coarse-grained permission model violates two security design principles.
We uncover two vulnerabilities in input fields, including the alarming discovery of passwords in plaintext.
- Score: 22.717150034358948
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we perform a comprehensive analysis of the security of text
input fields in web browsers. We find that browsers' coarse-grained permission
model violates two security design principles: least privilege and complete
mediation. We further uncover two vulnerabilities in input fields, including
the alarming discovery of passwords in plaintext within the HTML source code of
the web page. To demonstrate the real-world impact of these vulnerabilities, we
design a proof-of-concept extension, leveraging techniques from static and
dynamic code injection attacks to bypass the web store review process. Our
measurements and case studies reveal that these vulnerabilities are prevalent
across various websites, with sensitive user information, such as passwords,
exposed in the HTML source code of even high-traffic sites like Google and
Cloudflare. We find that a significant percentage (12.5%) of extensions
possess the necessary permissions to exploit these vulnerabilities and identify
190 extensions that directly access password fields. Finally, we propose two
countermeasures to address these risks: a bolt-on JavaScript package for
immediate adoption by website developers, allowing them to protect sensitive
input fields, and a browser-level solution that alerts users when an extension
accesses sensitive input fields. Our research highlights the urgent need for
improved security measures to protect sensitive user information online.
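
To ground the abstract's central claim, the sketch below (TypeScript; the names and the shielding approach are illustrative assumptions, not the paper's published implementation) shows both sides of the problem: how any script with DOM access, such as an extension content script, can read a password field in plaintext, and one plausible way a bolt-on script could isolate such a field behind a closed shadow root.

```typescript
// Vulnerability sketch: any code with DOM access (e.g., an extension
// content script) can read the live value of a password input.
const pw = document.querySelector<HTMLInputElement>('input[type="password"]');
if (pw) {
  pw.addEventListener("input", () => {
    // A real attack would exfiltrate this value; logging stands in for that.
    console.log("captured:", pw.value);
  });
}

// Mitigation sketch (an assumption, not necessarily the authors' package):
// host the field inside a closed shadow root, which querySelector calls
// from page or extension scripts cannot traverse.
function shieldPasswordField(host: HTMLElement): HTMLInputElement {
  const shadow = host.attachShadow({ mode: "closed" });
  const input = document.createElement("input");
  input.type = "password";
  shadow.appendChild(input);
  return input; // only code holding this reference can read input.value
}
```

A closed shadow root alone is not a complete defense: a content script that runs at document_start can patch Element.prototype.attachShadow before any page script executes, which is one reason the abstract also calls for a browser-level alert rather than page-side fixes alone.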
Related papers
- User Profiles: The Achilles' Heel of Web Browsers [12.5263811476743]
We show that, except for the Tor Browser, all modern browsers store sensitive data in home directories with little to no integrity or confidentiality controls.
We show that security measures like password and cookie encryption can be easily bypassed.
HTTPS can be fully bypassed with the deployment of custom, potentially malicious root certificates.
arXiv Detail & Related papers (2025-04-24T16:01:48Z) - Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs.
We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - WAFFLED: Exploiting Parsing Discrepancies to Bypass Web Application Firewalls [4.051306574166042]
Web Application Firewalls (WAFs) are a key line of defense for web applications, and bypassing them can compromise that protection.
We present an innovative approach to bypassing WAFs by uncovering parsing discrepancies.
We identified and confirmed 1207 bypasses across 5 well-known WAFs.
arXiv Detail & Related papers (2025-03-13T19:56:29Z) - A Study on Malicious Browser Extensions in 2025 [0.3749861135832073]
This paper examines the evolving threat landscape of malicious browser extensions in 2025, focusing on Mozilla Firefox and Chrome.
Our research successfully bypassed the security mechanisms of Firefox and Chrome, demonstrating that malicious extensions can still be developed, published, and executed within the Mozilla Add-ons Store and Chrome Web Store.
arXiv Detail & Related papers (2025-03-06T10:24:27Z) - Dancer in the Dark: Synthesizing and Evaluating Polyglots for Blind Cross-Site Scripting [10.696934248458136]
Cross-Site Scripting (XSS) is a prevalent and well-known security problem in web applications.
We present the first comprehensive study on blind XSS (BXSS).
We develop a method for synthesizing polyglots, small XSS payloads that execute in all common injection contexts.
arXiv Detail & Related papers (2025-02-12T15:02:30Z) - FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks [45.65210717380502]
Large language models (LLMs) are widely deployed as the backbone of real-world applications, augmented with additional tools and text information.
Prompt injection attacks are particularly threatening: malicious instructions injected into this external text can exploit LLMs to generate the answers attackers desire.
This paper introduces a novel test-time defense strategy named AuThentication with Hash-based tags (FATH).
arXiv Detail & Related papers (2024-10-28T20:02:47Z) - EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage [40.82238259404402]
We conduct the first study on the privacy risks of generalist web agents in adversarial environments.
First, we present a realistic threat model for attacks on the website, where we consider two adversarial targets: stealing users' specific PII or the entire user request.
We collect 177 action steps that involve diverse PII categories on realistic websites from the Mind2Web dataset, and conduct experiments using one of the most capable generalist web agent frameworks to date.
arXiv Detail & Related papers (2024-09-17T15:49:44Z) - Protecting Onion Service Users Against Phishing [1.6435014180036467]
Phishing websites are a common phenomenon among Tor onion services.
Phishers exploit the fact that it is tremendously difficult to distinguish phishing from authentic onion domain names.
Operators of onion services devised several strategies to protect their users against phishing.
None of these strategies protects users against phishing without producing traces about visited services.
arXiv Detail & Related papers (2024-08-14T19:51:30Z) - Exploiting Leakage in Password Managers via Injection Attacks [16.120271337898235]
This work explores injection attacks against password managers.
In this setting, the adversary controls their own application client, which they use to "inject" chosen payloads to a victim's client via, for example, sharing credentials with them.
arXiv Detail & Related papers (2024-08-13T17:45:12Z) - Exploring Vulnerabilities and Protections in Large Language Models: A Survey [1.6179784294541053]
This survey examines the security challenges of Large Language Models (LLMs).
It focuses on two main areas: Prompt Hacking and Adversarial Attacks.
By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems.
arXiv Detail & Related papers (2024-06-01T00:11:09Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - LLMs in Web Development: Evaluating LLM-Generated PHP Code Unveiling Vulnerabilities and Limitations [0.0]
This study evaluates the security of web application code generated by Large Language Models, analyzing 2,500 GPT-4 generated PHP websites.
Our investigation focuses on identifying Insecure File Upload, SQL Injection, Stored XSS, and Reflected XSS in GPT-4 generated PHP code.
According to Burp's scan, 11.56% of the sites can be compromised outright. When static scan results are added, 26% had at least one vulnerability exploitable through web interaction.
arXiv Detail & Related papers (2024-04-21T20:56:02Z) - Passwords Are Meant to Be Secret: A Practical Secure Password Entry Channel for Web Browsers [7.049738935364298]
Malicious client-side scripts and browser extensions can steal passwords after they have been autofilled by the manager into the web page.
This paper explores what role the password manager can take in preventing the theft of autofilled credentials without requiring a change to user behavior.
arXiv Detail & Related papers (2024-02-09T03:21:14Z) - Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities.
Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their lack of awareness in avoiding the execution of instructions within external content.
We propose two novel defense mechanisms, boundary awareness and explicit reminder, to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Detecting Backdoors in Deep Text Classifiers [43.36440869257781]
We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models.
Our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning.
arXiv Detail & Related papers (2022-10-11T07:48:03Z) - Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.