Thwart Me If You Can: An Empirical Analysis of Android Platform Armoring Against Stalkerware
- URL: http://arxiv.org/abs/2508.02454v1
- Date: Mon, 04 Aug 2025 14:18:45 GMT
- Title: Thwart Me If You Can: An Empirical Analysis of Android Platform Armoring Against Stalkerware
- Authors: Malvika Jadhav, Wenxuan Bao, Vincent Bindschaedler
- Abstract summary: Stalkerware is a serious threat to individuals' privacy that is receiving increased attention from the security and privacy research communities. We perform a systematic analysis of a large corpus of recent Android stalkerware apps. Our investigation reveals new insights into tactics used by stalkerware and may inspire alternative defense strategies.
- Score: 6.427108592578506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stalkerware is a serious threat to individuals' privacy that is receiving increased attention from the security and privacy research communities. Existing works have largely focused on studying leading stalkerware apps, dual-purpose apps, monetization of stalkerware, or the experience of survivors. However, there remains a need to understand potential defenses beyond the detection-and-removal approach, which may not necessarily be effective in the context of stalkerware. In this paper, we perform a systematic analysis of a large corpus of recent Android stalkerware apps. We combine multiple analysis techniques to quantify stalkerware behaviors and capabilities and how these evolved over time. Our primary goal is understanding: how (and whether) recent Android platform changes -- largely designed to improve user privacy -- have thwarted stalkerware functionality; how stalkerware may have adapted as a result; and what we may conclude about potential defenses. Our investigation reveals new insights into tactics used by stalkerware and may inspire alternative defense strategies.
Related papers
- An In-kernel Forensics Engine for Investigating Evasive Attacks [0.28894038270224864]
This paper introduces LASE, an open-source Low-Artifact Forensics Engine to perform threat analysis and forensics in the Windows operating system. LASE augments current analysis tools by providing detailed, system-wide monitoring capabilities while minimizing detectable artifacts.
arXiv Detail & Related papers (2025-05-10T03:40:17Z) - A Computer Vision Based Approach for Stalking Detection Using a CNN-LSTM-MLP Hybrid Fusion Model [1.0691590188849427]
Stalking in public places has become a common occurrence, with women being the most affected.
Detecting stalking is therefore essential, as these criminal activities can be stopped once identified.
In this research, we propose a novel deep learning-based hybrid fusion model to detect potential stalkers from a single video.
arXiv Detail & Related papers (2024-02-05T18:53:54Z) - Can you See me? On the Visibility of NOPs against Android Malware Detectors [1.2187048691454239]
This paper proposes a visibility metric that assesses the difficulty in spotting NOPs and similar non-operational codes.
We tested our metric on a state-of-the-art, opcode-based deep learning system for Android malware detection.
arXiv Detail & Related papers (2023-12-28T20:48:16Z) - Stop Following Me! Evaluating the Effectiveness of Anti-Stalking Features of Personal Item Tracking Devices [4.604003661048267]
Personal item tracking devices are popular for locating lost items such as keys, wallets, and suitcases.
They are now being abused by stalkers and domestic abusers to track their victims' location over time.
Some device manufacturers created anti-stalking features in response, and later improved on them after criticism that they were insufficient.
We analyse the effectiveness of the anti-stalking features with five brands of tracking devices through a gamified quasi-experiment in collaboration with the Assassins' Guild student society.
arXiv Detail & Related papers (2023-12-12T10:51:50Z) - Robust Recommender System: A Survey and Future Directions [58.87305602959857]
We first present a taxonomy to organize current techniques for withstanding malicious attacks and natural noise. We then explore state-of-the-art methods in each category, including fraudster detection, adversarial training, and certifiably robust training for defending against malicious attacks. We discuss robustness across varying recommendation scenarios and its interplay with other properties like accuracy, interpretability, privacy, and fairness.
arXiv Detail & Related papers (2023-09-05T08:58:46Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - AirGuard -- Protecting Android Users From Stalking Attacks By Apple Find My Devices [78.08346367878578]
We reverse engineer Apple's tracking protection in iOS and discuss its features regarding stalking detection.
We design "AirGuard" and release it as an Android app to protect against abuse by Apple tracking devices.
arXiv Detail & Related papers (2022-02-23T22:31:28Z) - Few-Shot Backdoor Attacks on Visual Object Tracking [80.13936562708426]
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
We show that an adversary can easily implant hidden backdoors into VOT models by tampering with the training process.
We show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to potential backdoor attacks.
arXiv Detail & Related papers (2022-01-31T12:38:58Z) - Brief View and Analysis to Latest Android Security Issues and Approaches [0.0]
We conduct a wide range of analyses, covering the latest malware, Android security features, and defense approaches.
We also report findings gathered while collecting information and conducting experiments.
arXiv Detail & Related papers (2021-09-02T09:34:11Z) - Backdoor Learning: A Survey [75.59571756777342]
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)