An Exploration Into Web Session Security - A Systematic Literature Review
- URL: http://arxiv.org/abs/2310.10687v1
- Date: Sat, 14 Oct 2023 16:22:07 GMT
- Title: An Exploration Into Web Session Security - A Systematic Literature Review
- Authors: Md. Imtiaz Habib, Abdullah Al Maruf, Md. Jobair Ahmed Nabil
- Abstract summary: This paper reviews the most common attacks against web sessions, i.e., attacks that target honest users who are legitimately establishing a session with a trusted web application.
We review existing security solutions that prevent or mitigate these attacks and assess each one against four criteria for judging its viability.
The guidelines we identify should help designers of future solutions advance web security in a more structured and holistic way.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper reviews the most common attacks against web sessions, i.e.,
attacks that target honest users who are legitimately establishing a session
with a trusted web application. We review existing security solutions that
prevent or mitigate these attacks and assess each one against four criteria
for judging its viability. We then point out guidelines that the designers of
the reviewed proposals have taken into account. The guidelines we identify
should help future solutions advance web security in a more structured and
holistic way.
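To make the defensive side concrete, below is a minimal sketch of the most widely recommended session-cookie hardening, a common defense against session hijacking and CSRF. The cookie name and token are illustrative, not taken from the paper; only Python's standard library is used.

```python
from http.cookies import SimpleCookie
import secrets

# Hypothetical hardened session cookie; the attributes are standard HTTP.
cookie = SimpleCookie()
cookie["session_id"] = secrets.token_urlsafe(32)  # unguessable identifier
cookie["session_id"]["secure"] = True    # send only over HTTPS (thwarts sniffing)
cookie["session_id"]["httponly"] = True  # hide from JavaScript (limits XSS theft)
cookie["session_id"]["samesite"] = "Lax" # restrict cross-site sends (limits CSRF)
print(cookie.output())  # the Set-Cookie header a server would emit
```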
Related papers
- OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety [58.201189860217724]
We introduce OpenAgentSafety, a comprehensive framework for evaluating agent behavior across eight critical risk categories.
Unlike prior work, our framework evaluates agents that interact with real tools, including web browsers, code execution environments, file systems, bash shells, and messaging platforms.
It combines rule-based analysis with LLM-as-judge assessments to detect both overt and subtle unsafe behaviors.
arXiv Detail & Related papers (2025-07-08T16:18:54Z)
- VPI-Bench: Visual Prompt Injection Attacks for Computer-Use Agents [74.6761188527948]
Computer-Use Agents (CUAs) with full system access pose significant security and privacy risks.
We investigate Visual Prompt Injection (VPI) attacks, where malicious instructions are visually embedded within rendered user interfaces.
Our empirical study shows that current CUAs and BUAs can be deceived at rates of up to 51% and 100%, respectively, on certain platforms.
arXiv Detail & Related papers (2025-06-03T05:21:50Z)
- The Hidden Dangers of Browsing AI Agents [0.0]
This paper presents a comprehensive security evaluation of such agents, focusing on systemic vulnerabilities across multiple architectural layers.
Our work outlines the first end-to-end threat model for browsing agents and provides actionable guidance for securing their deployment in real-world environments.
arXiv Detail & Related papers (2025-05-19T13:10:29Z)
- Browser Security Posture Analysis: A Client-Side Security Assessment Framework [0.0]
This paper presents a browser-based client-side security assessment toolkit that runs entirely in JavaScript and WebAssembly within the browser.
It performs a battery of over 120 in-browser security tests in situ, providing fine-grained diagnostics of security policies and features that network-level or OS-level tools cannot observe.
We discuss the security and privacy implications of our findings, compare with related work in browser security and enterprise endpoint solutions, and outline future enhancements such as real-time posture monitoring and SIEM integration.
arXiv Detail & Related papers (2025-05-12T20:38:19Z)
- SafeArena: Evaluating the Safety of Autonomous Web Agents [65.49740046281116]
LLM-based agents are becoming increasingly proficient at solving web-based tasks.
With this capability comes a greater risk of misuse for malicious purposes.
We propose SafeArena, the first benchmark to focus on the deliberate misuse of web agents.
arXiv Detail & Related papers (2025-03-06T20:43:14Z)
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
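As a toy illustration of threat (2), the sketch below uses a bag-of-words cosine similarity as a stand-in for a neural relevance model; the query and passages are invented for the example.

```python
import math
from collections import Counter

def cos_sim(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors (toy relevance model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

query = "best antivirus software 2025"
passage = "Our product removes all threats instantly."  # hypothetical spam text
injected = passage + " best antivirus software 2025"    # query terms injected

print(cos_sim(query, passage))   # ~0.0: no lexical overlap with the query
print(cos_sim(query, injected))  # much higher perceived relevance
```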
arXiv Detail & Related papers (2025-01-30T18:02:15Z)
- WebAssembly and Security: a review [0.8962460460173961]
We aim to fill this gap by proposing a comprehensive review of research works dealing with security in WebAssembly.
We analyze 121 papers by identifying seven different security categories.
arXiv Detail & Related papers (2024-07-17T03:37:28Z)
- Evaluating Google's Protected Audience Protocol [7.737740676767729]
Google has proposed the Privacy Sandbox initiative to enable ad targeting without third-party cookies.
This work focuses on analyzing linkage privacy risks for the reporting mechanisms proposed in the Protected Audience proposal.
arXiv Detail & Related papers (2024-05-13T18:28:56Z)
- SoK: Analysis techniques for WebAssembly [0.0]
WebAssembly is a low-level bytecode language that allows languages like C, C++, and Rust to be executed in the browser at near-native performance.
Vulnerabilities in memory-unsafe languages, like C and C++, can translate into vulnerabilities in WebAssembly binaries.
WebAssembly has been used for malicious purposes like cryptojacking.
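To ground the "low-level bytecode at near-native performance" claim, here is a minimal sketch that compiles a hand-written WebAssembly text module and calls it from Python. It assumes the third-party wasmtime package is installed and is purely illustrative.

```python
from wasmtime import Store, Module, Instance  # assumes: pip install wasmtime

store = Store()
# A minimal WebAssembly text-format module exporting an `add` function.
module = Module(store.engine, """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
""")
instance = Instance(store, module, [])  # no imports needed
add = instance.exports(store)["add"]
print(add(store, 2, 3))                 # -> 5
```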
arXiv Detail & Related papers (2024-01-11T14:28:13Z)
- Poisoning Retrieval Corpora by Injecting Adversarial Passages [79.14287273842878]
We propose a novel attack for dense retrieval systems in which a malicious user generates a small number of adversarial passages.
When these adversarial passages are inserted into a large retrieval corpus, we show that this attack is highly effective in fooling these systems.
We also benchmark and compare a range of state-of-the-art dense retrievers, both unsupervised and supervised.
arXiv Detail & Related papers (2023-10-29T21:13:31Z)
- "Make Them Change it Every Week!": A Qualitative Exploration of Online Developer Advice on Usable and Secure Authentication [21.58767421554059]
We aim to understand the accessibility and quality of online advice and provide insights into how online advice might contribute to (in)secure and (un)usable authentication.
Based on a survey with 18 professional web developers, we obtained 406 documents and qualitatively analyzed 272 contained pieces of advice in depth.
The most common advice concerns password-based authentication, with little coverage of more modern alternatives.
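For contrast with the "make them change it every week" folklore in the title, mainstream advice today centers on salted, memory-hard password hashing. A minimal standard-library sketch follows; the scrypt parameters are illustrative, not drawn from the paper.

```python
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh per-user salt using scrypt (stdlib)."""
    salt = os.urandom(16)  # random salt defeats precomputed rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```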
arXiv Detail & Related papers (2023-09-01T21:41:23Z)
- A Novel Approach To User Agent String Parsing For Vulnerability Analysis Using Multi-Headed Attention [3.3029515721630855]
A novel methodology for parsing UASs using Multi-Headed Attention Based transformers is proposed.
The proposed methodology exhibits strong performance in parsing a variety of UASs with differing formats.
A framework to utilize parsed UASs to estimate the vulnerability scores for large sections of publicly visible IT networks or regions is also discussed.
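To make the core building block concrete, the sketch below byte-tokenizes a user agent string and runs it through one self-attention layer. It assumes PyTorch and stands in for the paper's full transformer parser, whose exact architecture the abstract does not specify.

```python
import torch                 # assumes: pip install torch
import torch.nn as nn

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
tokens = torch.tensor([[min(ord(c), 255) for c in ua]])  # byte-level tokens

embed = nn.Embedding(256, 64)  # byte vocabulary -> 64-dim vectors
attn = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)

x = embed(tokens)              # (1, seq_len, 64)
out, weights = attn(x, x, x)   # self-attention over the UA string
print(out.shape, weights.shape)
```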
arXiv Detail & Related papers (2023-06-06T14:49:25Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
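A toy rendering of the coverage idea (not the paper's actual architecture): profile which hidden units fire on trusted data, then flag inputs that activate previously unseen units. Model and data here are random stand-ins; PyTorch is assumed.

```python
import torch                 # assumes: pip install torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
activations = {}
model[1].register_forward_hook(          # record which units fire (> 0.5)
    lambda mod, inp, out: activations.update(last=(out > 0.5))
)

covered = torch.zeros(32, dtype=torch.bool)
for _ in range(100):                      # profile on stand-in "trusted" data
    model(torch.randn(8, 16))
    covered |= activations["last"].any(dim=0)

model(torch.randn(1, 16) * 10)            # unusually scaled input at run time
novel = activations["last"].any(dim=0) & ~covered
print("flagged as suspicious:", bool(novel.any()))
```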
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- On the Social and Technical Challenges of Web Search Autosuggestion Moderation [118.47867428272878]
Autosuggestions are typically generated by machine learning (ML) systems trained on a corpus of search logs and document representations.
While current search engines have become increasingly proficient at suppressing such problematic suggestions, there are still persistent issues that remain.
We discuss several dimensions of problematic suggestions, difficult issues along the pipeline, and why our discussion applies to the increasing number of applications beyond web search.
arXiv Detail & Related papers (2020-07-09T19:22:00Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.