Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
- URL: http://arxiv.org/abs/2308.16321v1
- Date: Wed, 30 Aug 2023 21:02:48 GMT
- Title: Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
- Authors: Asmit Nayak, Rishabh Khandelwal, Kassem Fawaz
- Abstract summary: We perform a comprehensive analysis of the security of text input fields in web browsers.
We find that browsers' coarse-grained permission model violates two security design principles.
We uncover two vulnerabilities in input fields, including the alarming discovery of passwords in plaintext.
- Score: 22.717150034358948
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we perform a comprehensive analysis of the security of text
input fields in web browsers. We find that browsers' coarse-grained permission
model violates two security design principles: least privilege and complete
mediation. We further uncover two vulnerabilities in input fields, including
the alarming discovery of passwords in plaintext within the HTML source code of
the web page. To demonstrate the real-world impact of these vulnerabilities, we
design a proof-of-concept extension, leveraging techniques from static and
dynamic code injection attacks to bypass the web store review process. Our
measurements and case studies reveal that these vulnerabilities are prevalent
across various websites, with sensitive user information, such as passwords,
exposed in the HTML source code of even high-traffic sites like Google and
Cloudflare. We find that a significant percentage (12.5%) of extensions
possess the necessary permissions to exploit these vulnerabilities and identify
190 extensions that directly access password fields. Finally, we propose two
countermeasures to address these risks: a bolt-on JavaScript package for
immediate adoption by website developers allowing them to protect sensitive
input fields, and a browser-level solution that alerts users when an extension
accesses sensitive input fields. Our research highlights the urgent need for
improved security measures to protect sensitive user information online.
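To make the abstract's two core points concrete, the sketch below illustrates (a) how any script with DOM access, such as an extension content script granted host permissions, can read a password field's value in plaintext, and (b) one possible bolt-on style protection a site could apply. This is a minimal TypeScript sketch under stated assumptions, not the paper's actual extension or JavaScript package; the function names and the closure-based masking approach are illustrative only.

```typescript
// Illustrative sketch only; not the paper's extension or proposed package.
// Assumption: this code runs in a page or content-script context with DOM access.

// (a) Vulnerability: any script with DOM access can read password fields in plaintext.
function readPasswordFields(): string[] {
  const fields = document.querySelectorAll<HTMLInputElement>('input[type="password"]');
  // .value exposes whatever the user or a password manager has filled in.
  return Array.from(fields, (field) => field.value);
}

// (b) Hypothetical bolt-on protection: keep the real secret in a closure and leave
// only a mask in the DOM-visible value that other scripts can observe.
// Simplified: handles plain typing and backspace only (no paste, selection, or IME).
function protectField(field: HTMLInputElement, onSubmit: (secret: string) => void): void {
  let secret = "";
  field.addEventListener("beforeinput", (event) => {
    const e = event as InputEvent;
    e.preventDefault(); // keep the keystroke out of the DOM value
    if (e.inputType === "insertText" && e.data) {
      secret += e.data;
    } else if (e.inputType === "deleteContentBackward") {
      secret = secret.slice(0, -1);
    }
    field.value = "\u2022".repeat(secret.length); // other scripts only ever see the mask
  });
  field.form?.addEventListener("submit", (event) => {
    event.preventDefault();
    onSubmit(secret); // the site's own submission code receives the real password
  });
}
```

With this kind of wrapper, a site would call protectField on each sensitive input and pass the real value to its own submission handler, so a third-party script calling something like readPasswordFields would observe only the masked placeholder.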
Related papers
- Dancer in the Dark: Synthesizing and Evaluating Polyglots for Blind Cross-Site Scripting [10.696934248458136]
Cross-Site Scripting (XSS) is a prevalent and well-known security problem in web applications.
We present the first comprehensive study on blind XSS (BXSS).
We develop a method for synthesizing polyglots, small XSS payloads that execute in all common injection contexts.
arXiv Detail & Related papers (2025-02-12T15:02:30Z)
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z)
- FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks [45.65210717380502]
Large language models (LLMs) are widely deployed as the backbone of real-world applications, augmented with additional tools and external text information.
Prompt injection attacks are particularly threatening: malicious instructions injected into this external text can manipulate LLMs into generating the answers attackers desire.
This paper introduces a novel test-time defense strategy named AuThentication with Hash-based tags (FATH).
arXiv Detail & Related papers (2024-10-28T20:02:47Z) - Exploiting Leakage in Password Managers via Injection Attacks [16.120271337898235]
This work explores injection attacks against password managers.
In this setting, the adversary controls their own application client, which they use to "inject" chosen payloads into a victim's client via, for example, sharing credentials with them.
arXiv Detail & Related papers (2024-08-13T17:45:12Z) - Exploring Vulnerabilities and Protections in Large Language Models: A Survey [1.6179784294541053]
This survey examines the security challenges of Large Language Models (LLMs).
It focuses on two main areas: Prompt Hacking and Adversarial Attacks.
By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems.
arXiv Detail & Related papers (2024-06-01T00:11:09Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly been integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - LLMs in Web Development: Evaluating LLM-Generated PHP Code Unveiling Vulnerabilities and Limitations [0.0]
This study evaluates the security of web application code generated by Large Language Models, analyzing 2,500 GPT-4 generated PHP websites.
Our investigation focuses on identifying Insecure File Upload, SQL Injection, Stored XSS, and Reflected XSS in GPT-4 generated PHP code.
According to Burp's scan, 11.56% of the sites can be compromised outright. Including static scan results, 26% had at least one vulnerability that can be exploited through web interaction.
arXiv Detail & Related papers (2024-04-21T20:56:02Z) - Passwords Are Meant to Be Secret: A Practical Secure Password Entry Channel for Web Browsers [7.049738935364298]
Malicious client-side scripts and browser extensions can steal passwords after they have been autofilled by the manager into the web page.
This paper explores what role the password manager can take in preventing the theft of autofilled credentials without requiring a change to user behavior.
arXiv Detail & Related papers (2024-02-09T03:21:14Z) - Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities.
Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their lack of awareness in avoiding the execution of instructions within external content.
We propose two novel defense mechanisms-boundary awareness and explicit reminder-to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.