LLMs on support of privacy and security of mobile apps: state of the art and research directions
- URL: http://arxiv.org/abs/2506.11679v2
- Date: Mon, 07 Jul 2025 17:36:57 GMT
- Title: LLMs on support of privacy and security of mobile apps: state of the art and research directions
- Authors: Tran Thanh Lam Nguyen, Barbara Carminati, Elena Ferrari
- Abstract summary: Security and privacy risks still threaten users of mobile apps. We explore the application of Large Language Models to identify security risks and privacy violations. We present an approach to detect sensitive data leakage when users share images online.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern life has witnessed the explosion of mobile devices. However, besides the valuable features that bring convenience to end users, security and privacy risks still threaten users of mobile apps. The increasing sophistication of these threats in recent years has underscored the need for more advanced and efficient detection approaches. In this chapter, we explore the application of Large Language Models (LLMs) to identify security risks and privacy violations and mitigate them for the mobile application ecosystem. By introducing state-of-the-art research that applied LLMs to mitigate the top 10 common security risks of smartphone platforms, we highlight the feasibility and potential of LLMs to replace traditional analysis methods, such as dynamic and hybrid analysis of mobile apps. As a representative example of LLM-based solutions, we present an approach to detect sensitive data leakage when users share images online, a common behavior of smartphone users nowadays. Finally, we discuss open research challenges.
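The chapter's representative example, detecting sensitive data leakage before a user shares an image online, is only summarized in the abstract. As a loose illustration of such a screening pipeline, the sketch below prompts an LLM to flag sensitive data in text extracted from an image (e.g., via OCR); since the paper's actual method is not described here, the LLM call is stubbed with simple regex rules purely so the example runs standalone. All names, patterns, and the pipeline shape are assumptions, not the authors' approach.

```python
import re

# Hypothetical categories of sensitive data that might appear in an image
# (screenshots, photos of documents, etc.) a user is about to share.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\+?\d{10,12}\b"),
}

def build_screening_prompt(ocr_text: str) -> str:
    """Build a prompt for an LLM (hypothetical call site) to screen
    image-extracted text for sensitive data before sharing."""
    return (
        "Does the following text, extracted from an image a user is about "
        "to share online, contain personal or sensitive data (emails, card "
        f"numbers, phone numbers, addresses)? Answer yes/no.\n\nText: {ocr_text}"
    )

def stub_llm_screen(ocr_text: str) -> list[str]:
    """Stand-in for the LLM: return the categories of sensitive data found."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(ocr_text)]

if __name__ == "__main__":
    text = "Contact me at alice@example.com, card 4111 1111 1111 1111"
    print(stub_llm_screen(text))  # ['email', 'credit_card']
```

In a real deployment, `stub_llm_screen` would be replaced by a call to an actual (possibly multimodal) LLM using a prompt like the one above, trading the brittleness of fixed patterns for the model's broader coverage of sensitive-data types.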
Related papers
- Large AI Model-Enabled Secure Communications in Low-Altitude Wireless Networks: Concepts, Perspectives and Case Study [92.15255222408636]
Low-altitude wireless networks (LAWNs) have the potential to revolutionize communications by supporting a range of applications. We investigate some large artificial intelligence model (LAM)-enabled solutions for secure communications in LAWNs. To demonstrate the practical benefits of LAMs for secure communications in LAWNs, we propose a novel LAM-based optimization framework.
arXiv Detail & Related papers (2025-08-01T01:53:58Z)
- A Survey on Privacy Risks and Protection in Large Language Models [13.602836059584682]
Large Language Models (LLMs) have become increasingly integral to diverse applications, raising privacy concerns. This survey offers a comprehensive overview of privacy risks associated with LLMs and examines current solutions to mitigate these challenges.
arXiv Detail & Related papers (2025-05-04T03:04:07Z)
- LLMs in Mobile Apps: Practices, Challenges, and Opportunities [4.104646810514711]
The integration of AI techniques has become increasingly popular in software development. With the rise of large language models (LLMs) and generative AI, developers now have access to a wealth of high-quality open-source models and APIs from closed-source providers.
arXiv Detail & Related papers (2025-02-21T19:53:43Z)
- Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents [67.07177243654485]
This survey collects and analyzes the different threats faced by large language model-based agents.
We identify six key features of LLM-based agents, based on which we summarize the current research progress.
We select four representative agents as case studies to analyze the risks they may face in practical use.
arXiv Detail & Related papers (2024-11-14T15:40:04Z)
- The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies [43.65655064122938]
Large Language Models (LLMs) agents have evolved to perform complex tasks.
The widespread applications of LLM agents demonstrate their significant commercial value.
However, they also expose security and privacy vulnerabilities.
This survey aims to provide a comprehensive overview of the newly emerged privacy and security issues faced by LLM agents.
arXiv Detail & Related papers (2024-07-28T00:26:24Z)
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) with external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications [37.316238236750415]
Large language models (LLMs) are increasingly deployed as the service backend for LLM-integrated applications.
In this work, we consider a setup where the user and LLM interact via an LLM-integrated application in the middle.
We identify potential vulnerabilities that can originate from the malicious application developer or from an outsider threat.
We develop a lightweight, threat-agnostic defense that mitigates both insider and outsider threats.
arXiv Detail & Related papers (2023-11-07T20:13:05Z)
- Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.