Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
- URL: http://arxiv.org/abs/2308.16321v1
- Date: Wed, 30 Aug 2023 21:02:48 GMT
- Title: Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields
- Authors: Asmit Nayak, Rishabh Khandelwal, Kassem Fawaz
- Abstract summary: We perform a comprehensive analysis of the security of text input fields in web browsers.
We find that browsers' coarse-grained permission model violates two security design principles.
We uncover two vulnerabilities in input fields, including the alarming discovery of passwords in plaintext.
- Score: 22.717150034358948
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we perform a comprehensive analysis of the security of text
input fields in web browsers. We find that browsers' coarse-grained permission
model violates two security design principles: least privilege and complete
mediation. We further uncover two vulnerabilities in input fields, including
the alarming discovery of passwords in plaintext within the HTML source code of
the web page. To demonstrate the real-world impact of these vulnerabilities, we
design a proof-of-concept extension, leveraging techniques from static and
dynamic code injection attacks to bypass the web store review process. Our
measurements and case studies reveal that these vulnerabilities are prevalent
across various websites, with sensitive user information, such as passwords,
exposed in the HTML source code of even high-traffic sites like Google and
Cloudflare. We find that a significant percentage (12.5%) of extensions
possess the necessary permissions to exploit these vulnerabilities and identify
190 extensions that directly access password fields. Finally, we propose two
countermeasures to address these risks: a bolt-on JavaScript package for
immediate adoption by website developers allowing them to protect sensitive
input fields, and a browser-level solution that alerts users when an extension
accesses sensitive input fields. Our research highlights the urgent need for
improved security measures to protect sensitive user information online.
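To make the first vulnerability concrete, the sketch below (TypeScript) shows how any script running in the page context, such as an extension content script with host permissions, could read an autofilled password straight from the DOM. This is an illustrative assumption about the attack surface described above, not the authors' actual proof-of-concept extension.

```typescript
// Illustrative sketch only; not the paper's proof-of-concept extension.
// Because password inputs are ordinary DOM nodes, any script executing in
// the page context can query them and read their current (plaintext) value.

function readPasswordFields(): string[] {
  const values: string[] = [];
  document
    .querySelectorAll<HTMLInputElement>('input[type="password"]')
    .forEach((field) => {
      if (field.value.length > 0) {
        values.push(field.value); // plaintext value, e.g. after autofill
      }
    });
  return values;
}

// A malicious content script could call this after autofill completes and
// exfiltrate the result to an attacker-controlled server.
console.log(readPasswordFields());
```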
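The proposed bolt-on JavaScript package is not specified in this abstract; as a rough sketch of one possible protection pattern, a site could isolate a sensitive input behind a closed shadow root so that page-level scripts (including extension content scripts querying the document) cannot reach its value. All names below are hypothetical.

```typescript
// Hypothetical sketch of a protection pattern, not the paper's actual package.
// A closed shadow root is not exposed via host.shadowRoot, so outside scripts
// cannot traverse into it with document.querySelector to read the input.

function protectSensitiveInput(placeholderId: string): HTMLInputElement {
  const host = document.getElementById(placeholderId); // e.g. an empty <div>
  if (!host) {
    throw new Error(`No element with id "${placeholderId}"`);
  }
  const shadow = host.attachShadow({ mode: 'closed' });
  const input = document.createElement('input');
  input.type = 'password';
  shadow.appendChild(input);
  // Only code holding this reference can read input.value.
  return input;
}

// Usage (hypothetical): the site keeps the returned reference private and
// reads the value from it in its own submit handler.
const secretField = protectSensitiveInput('password-slot');
```

This does not stop scripts that hook Element.prototype.attachShadow before the page's own code runs, so it is a mitigation sketch rather than a complete defense.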
Related papers
- FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks [45.65210717380502]
Large language models (LLMs) are widely deployed as the backbone of real-world applications, augmented with additional tools and external text information.
Prompt injection attacks are particularly threatening: malicious instructions injected into the external text can exploit LLMs to generate the answers the attackers desire.
This paper introduces a novel test-time defense strategy, named AuThentication with Hash-based tags (FATH).
arXiv Detail & Related papers (2024-10-28T20:02:47Z)
- Protecting Onion Service Users Against Phishing [1.6435014180036467]
Phishing websites are a common phenomenon among Tor onion services.
Phishers exploit the fact that it is tremendously difficult to distinguish phishing from authentic onion domain names.
Operators of onion services devised several strategies to protect their users against phishing.
None of these strategies protects users against phishing without producing traces about visited services.
arXiv Detail & Related papers (2024-08-14T19:51:30Z)
- Exploiting Leakage in Password Managers via Injection Attacks [16.120271337898235]
This work explores injection attacks against password managers.
In this setting, the adversary controls their own application client, which they use to "inject" chosen payloads into a victim's client, for example by sharing credentials with them.
arXiv Detail & Related papers (2024-08-13T17:45:12Z)
- Exploring Vulnerabilities and Protections in Large Language Models: A Survey [1.6179784294541053]
This survey examines the security challenges of Large Language Models (LLMs).
It focuses on two main areas: Prompt Hacking and Adversarial Attacks.
By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems.
arXiv Detail & Related papers (2024-06-01T00:11:09Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- LLMs in Web Development: Evaluating LLM-Generated PHP Code Unveiling Vulnerabilities and Limitations [0.0]
This study evaluates the security of web application code generated by Large Language Models, analyzing 2,500 GPT-4 generated PHP websites.
Our investigation focuses on identifying Insecure File Upload, SQL Injection, Stored XSS, and Reflected XSS vulnerabilities in GPT-4 generated PHP code.
According to Burp's scan, 11.56% of the sites can be compromised outright; adding static scan results, 26% had at least one vulnerability that can be exploited through web interaction.
arXiv Detail & Related papers (2024-04-21T20:56:02Z)
- Passwords Are Meant to Be Secret: A Practical Secure Password Entry Channel for Web Browsers [7.049738935364298]
Malicious client-side scripts and browser extensions can steal passwords after they have been autofilled by the manager into the web page.
This paper explores what role the password manager can take in preventing the theft of autofilled credentials without requiring a change to user behavior.
arXiv Detail & Related papers (2024-02-09T03:21:14Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and circumvent employed controls using prompt injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Detecting Backdoors in Deep Text Classifiers [43.36440869257781]
We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models.
Our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning.
arXiv Detail & Related papers (2022-10-11T07:48:03Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.