Systematic Analysis of Security and Vulnerabilities in Miniapps
- URL: http://arxiv.org/abs/2311.11382v1
- Date: Sun, 19 Nov 2023 17:47:26 GMT
- Title: Systematic Analysis of Security and Vulnerabilities in Miniapps
- Authors: Yuyang Han, Xu Ji, Zhiqiang Wang, Jianyi Zhang
- Abstract summary: Miniapps are of great importance in the mobile internet sector.
Vulnerabilities in miniapps can directly compromise the integrity of sensitive data.
This paper proposes a triad threat model focusing on users, servers and attackers to mitigate the security risks of miniapps.
- Score: 12.900904404633957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The past few years have witnessed a boom of miniapps. As lightweight applications, miniapps are of great importance in the mobile internet sector. Consequently, vulnerabilities in miniapps can compromise the integrity of sensitive data, posing a potential threat to user privacy. However, after a thorough review of the various research efforts in miniapp security, we found that research on the security of miniapp web interfaces remains limited. This paper proposes a triad threat model focusing on users, servers and attackers to mitigate the security risks of miniapps. Following the principle of least privilege and the direction of permission consistency, we design a novel analysis framework for the security risk assessment of miniapps based on this model. We then analyze the correlation between the security risk assessment and the threat model associated with the miniapp, which leads to identifying potential scopes and categorisations with security risks. In the case study, we identify nine major categories of vulnerability issues, such as SQL injection, logical vulnerabilities and cross-site scripting. We also assessed a total of 50,628 security risk hazards and provide specific examples.
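To make the framework's direction concrete, below is a minimal sketch of a permission-consistency check in the spirit of the least-privilege principle described in the abstract. The API-to-scope map, function names, and flagging logic are illustrative assumptions, not the paper's actual framework.

```python
# Illustrative least-privilege / permission-consistency check: compare the
# permission scopes a miniapp declares against the privileged APIs its code
# actually invokes. The API-to-scope map below is a hypothetical example.
API_SCOPE_MAP = {
    "getLocation": "scope.userLocation",
    "chooseAddress": "scope.address",
    "startRecord": "scope.record",
    "saveImageToPhotosAlbum": "scope.writePhotosAlbum",
}

def audit_permissions(declared_scopes: set[str], invoked_apis: set[str]) -> dict:
    """Flag over-privilege (declared but never used) and under-declaration
    (used but never declared), following the principle of least privilege."""
    required = {API_SCOPE_MAP[a] for a in invoked_apis if a in API_SCOPE_MAP}
    return {
        "over_privileged": sorted(declared_scopes - required),  # candidates to revoke
        "undeclared": sorted(required - declared_scopes),       # consistency violations
    }

report = audit_permissions(
    declared_scopes={"scope.userLocation", "scope.record", "scope.address"},
    invoked_apis={"getLocation"},
)
print(report)  # {'over_privileged': ['scope.address', 'scope.record'], 'undeclared': []}
```

A real assessment would extract the invoked APIs by static analysis of the miniapp package and pull declared scopes from its configuration, then aggregate such findings across the threat model's user, server and attacker perspectives.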
Related papers
- OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety [58.201189860217724]
We introduce OpenAgentSafety, a comprehensive framework for evaluating agent behavior across eight critical risk categories. Unlike prior work, our framework evaluates agents that interact with real tools, including web browsers, code execution environments, file systems, bash shells, and messaging platforms. It combines rule-based analysis with LLM-as-judge assessments to detect both overt and subtle unsafe behaviors.
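As a rough illustration of how rule-based analysis can be combined with an LLM-as-judge pass, here is a hedged sketch; the patterns, stubbed judge, and function names are invented for illustration and are not OpenAgentSafety's API.

```python
import re

# Rule layer: cheap regex checks catch overtly unsafe agent actions.
UNSAFE_PATTERNS = [
    re.compile(r"rm\s+-rf\s+/"),           # destructive shell command
    re.compile(r"curl\s+.*\|\s*(ba)?sh"),  # piping remote code into a shell
]

def rule_check(action: str) -> bool:
    return any(p.search(action) for p in UNSAFE_PATTERNS)

def llm_judge(action: str) -> bool:
    """Stub for an LLM-as-judge call that rates subtler behaviors;
    a real implementation would prompt a model with the transcript."""
    return "password" in action.lower()  # placeholder heuristic only

def is_unsafe(action: str) -> bool:
    # Overt violations short-circuit; subtle cases fall through to the judge.
    return rule_check(action) or llm_judge(action)

print(is_unsafe("bash -c 'rm -rf / --no-preserve-root'"))  # True
```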
arXiv Detail & Related papers (2025-07-08T16:18:54Z)
- SafeMobile: Chain-level Jailbreak Detection and Automated Evaluation for Multimodal Mobile Agents [58.21223208538351]
This work explores the security issues surrounding mobile multimodal agents. It attempts to construct a risk discrimination mechanism by incorporating behavioral sequence information. It also designs an automated assisted assessment scheme based on a large language model.
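A toy sketch of what chain-level risk discrimination over a behavioral sequence could look like; the action names and risky chains are invented, and the paper's actual mechanism is learned rather than rule-based.

```python
# A single step may look benign, but an ordered pair of steps can be risky
# (e.g., reading contacts and then sending a network request).
RISKY_CHAINS = [
    ("read_contacts", "send_network_request"),
    ("capture_screen", "send_network_request"),
]

def chain_risk(actions: list[str]) -> bool:
    for src, sink in RISKY_CHAINS:
        if src in actions and sink in actions and actions.index(src) < actions.index(sink):
            return True
    return False

print(chain_risk(["open_app", "read_contacts", "send_network_request"]))  # True
```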
arXiv Detail & Related papers (2025-07-01T15:10:00Z)
- SafeMLRM: Demystifying Safety in Multi-modal Large Reasoning Models [50.34706204154244]
Acquiring reasoning capabilities catastrophically degrades inherited safety alignment.
Certain scenarios suffer 25 times higher attack rates.
Despite tight reasoning-answer safety coupling, MLRMs demonstrate nascent self-correction.
arXiv Detail & Related papers (2025-04-09T06:53:23Z)
- Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks [88.84977282952602]
A high volume of recent ML security literature focuses on attacks against aligned large language models (LLMs).
In this paper, we analyze security and privacy vulnerabilities that are unique to LLM agents.
We conduct a series of illustrative attacks on popular open-source and commercial agents, demonstrating the immediate practical implications of their vulnerabilities.
arXiv Detail & Related papers (2025-02-12T17:19:36Z)
- Autonomous Identity-Based Threat Segmentation in Zero Trust Architectures [4.169915659794567]
Zero Trust Architectures (ZTA) fundamentally redefine network security by adopting a "trust nothing, verify everything" approach.
This research applies the proposed AI-driven, autonomous, identity-based threat segmentation in ZTA.
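A minimal sketch of identity-based authorization under "trust nothing, verify everything"; the segments and policy entries are invented for illustration, not the paper's method.

```python
# Every request is checked against the caller's identity segment rather
# than its network location; unknown identities are denied by default.
SEGMENT_POLICY = {
    "hr-analyst": {"hr-db:read"},
    "payroll-service": {"hr-db:read", "payroll-db:write"},
}

def authorize(identity: str, action: str) -> bool:
    return action in SEGMENT_POLICY.get(identity, set())

assert authorize("payroll-service", "payroll-db:write")
assert not authorize("hr-analyst", "payroll-db:write")
```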
arXiv Detail & Related papers (2025-01-10T15:35:02Z)
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
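A skeleton of an automated evaluation protocol of this kind, with stand-in model and judge functions; this shows only the loop's shape, not SafeBench's implementation.

```python
# Run each harmful query against a model and report the fraction of
# responses the judge deems unsafe.
def evaluate(model, judge, harmful_queries: list[str]) -> float:
    unsafe = sum(judge(model(q)) for q in harmful_queries)
    return unsafe / len(harmful_queries)

# Toy stand-ins for a real MLLM and an automated judge.
model = lambda q: "I cannot help with that."
judge = lambda resp: "cannot" not in resp  # unsafe if the model complied
print(f"unsafe-response rate: {evaluate(model, judge, ['q1', 'q2']):.0%}")
```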
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Deep Learning Enabled Cybersecurity Risk Assessment for Microservice Architectures [3.0936354370614607]
CyberWise Predictor is a framework designed for predicting and assessing security risks associated with microservice architectures.
Our framework achieves an average accuracy of 92% in automatically predicting vulnerability metrics for new vulnerabilities.
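As an illustrative stand-in for the idea of predicting vulnerability metrics from descriptions, here is a simple scikit-learn text classifier; the paper uses deep learning, and the data below is fabricated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny fabricated training set: description text -> severity label.
descriptions = [
    "SQL injection in login endpoint allows database dump",
    "Verbose error message reveals framework version",
    "Remote code execution via unsanitized template input",
    "Missing cache headers on static assets",
]
labels = ["HIGH", "LOW", "HIGH", "LOW"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(descriptions, labels)
print(clf.predict(["unauthenticated SQL injection in search form"]))
```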
arXiv Detail & Related papers (2024-03-22T12:42:33Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using established tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
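A compact sketch of the traditional risk rating calculation referred to above, averaging 0-9 likelihood and impact factors in the style of OWASP's methodology; the factor names and values are illustrative.

```python
# Overall risk is derived from averaged likelihood and impact factors,
# each scored 0-9, then bucketed into severity bands.
def rate(factors: dict[str, int]) -> float:
    return sum(factors.values()) / len(factors)

def bucket(score: float) -> str:
    return "LOW" if score < 3 else "MEDIUM" if score < 6 else "HIGH"

likelihood = rate({"skill_required": 4, "opportunity": 7, "awareness": 6})
impact = rate({"data_loss": 8, "reputation": 6, "compliance": 5})
print(bucket(likelihood), "likelihood,", bucket(impact), "impact")
```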
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- A Security Risk Taxonomy for Prompt-Based Interaction With Large Language Models [5.077431021127288]
This paper addresses a gap in current research by focusing on security risks posed by large language models (LLMs).
Our work proposes a taxonomy of security risks along the user-model communication pipeline and categorizes the attacks by target and attack type alongside the commonly used confidentiality, integrity, and availability (CIA) triad.
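A minimal sketch of mapping attack types onto the CIA triad; the specific mappings are invented for illustration and are not taken from the paper's taxonomy.

```python
# Two axes of a toy taxonomy: attack type -> CIA properties it threatens.
TAXONOMY = {
    "prompt injection": {"integrity"},
    "training-data extraction": {"confidentiality"},
    "denial-of-service via long prompts": {"availability"},
    "system-prompt leakage": {"confidentiality", "integrity"},
}

def affected(attack: str) -> set[str]:
    return TAXONOMY.get(attack, set())

print(affected("prompt injection"))  # {'integrity'}
```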
arXiv Detail & Related papers (2023-11-19T20:22:05Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems, developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
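A generic sketch of behavioral driver authentication with a Random Forest; the features and telemetry values are fabricated and unrelated to the paper's dataset or its attacks.

```python
from sklearn.ensemble import RandomForestClassifier

# Fabricated rows: [mean_speed, brake_pressure_variance, steering_entropy]
X = [[62.0, 0.10, 0.8], [61.5, 0.12, 0.7], [80.2, 0.40, 1.9], [79.0, 0.38, 2.1]]
y = ["driver_A", "driver_A", "driver_B", "driver_B"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[63.0, 0.11, 0.75]]))  # likely 'driver_A'
```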
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Do Software Security Practices Yield Fewer Vulnerabilities? [6.6840472845873276]
The goal of this study is to assist practitioners and researchers making informed decisions on which security practices to adopt.
Four security practices were the most important factors influencing vulnerability count.
The number of reported vulnerabilities increased rather than reduced as the aggregate security score of the packages increased.
arXiv Detail & Related papers (2022-10-20T20:04:02Z)
- Automated Security Assessment for the Internet of Things [6.690766107366799]
We propose an automated security assessment framework for IoT networks.
Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions.
This security model automatically assesses the security of the IoT network by capturing potential attack paths.
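A toy attack-graph traversal showing how potential attack paths can be captured; the device topology and vulnerability flags are invented for illustration.

```python
# Device reachability plus per-device vulnerability flags; the attacker
# can only pivot through vulnerable devices.
GRAPH = {"camera": ["hub"], "hub": ["thermostat", "nas"], "thermostat": [], "nas": []}
VULNERABLE = {"camera", "hub", "nas"}

def attack_paths(node, target, path=()):
    path = path + (node,)
    if node not in VULNERABLE:
        return  # hardened device blocks the pivot
    if node == target:
        yield path
        return
    for nxt in GRAPH.get(node, []):
        yield from attack_paths(nxt, target, path)

print(list(attack_paths("camera", "nas")))  # [('camera', 'hub', 'nas')]
```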
arXiv Detail & Related papers (2021-09-09T04:42:24Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)