One Size Does not Fit All: Quantifying the Risk of Malicious App
Encounters for Different Android User Profiles
- URL: http://arxiv.org/abs/2301.07346v1
- Date: Wed, 18 Jan 2023 07:31:41 GMT
- Title: One Size Does not Fit All: Quantifying the Risk of Malicious App
Encounters for Different Android User Profiles
- Authors: Savino Dambra, Leyla Bilge, Platon Kotzias, Yun Shen, Juan Caballero
- Abstract summary: We perform a large-scale quantitative analysis of the risk of encountering malware across user communities.
At the core of our study is a dataset of app installation logs collected from 12M Android mobile devices.
Our results confirm the inadequacy of one-size-fits-all protection solutions.
- Score: 18.58456177992614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous work has investigated the particularities of security practices
within specific user communities defined based on country of origin, age, prior
tech abuse, and economic status. Their results highlight that current security
solutions that adopt a one-size-fits-all-users approach ignore the differences
and needs of particular user communities. However, those works focus on a
single community or cluster users into hard-to-interpret sub-populations.
In this work, we perform a large-scale quantitative analysis of the risk of
encountering malware and other potentially unwanted applications (PUA) across
user communities. At the core of our study is a dataset of app installation
logs collected from 12M Android mobile devices. Leveraging user-installed apps,
we define intuitive profiles based on users' interests (e.g., gamers and
investors), and fit a subset of 5.4M devices to those profiles. Our analysis is
structured in three parts. First, we perform risk analysis on the whole
population to measure how the risk of malicious app encounters is affected by
different factors. Next, we create different profiles to investigate whether
risk differences across users may be due to their interests. Finally, we
compare a per-profile approach for classifying clean and infected devices with
the classical approach that considers the whole population.
We observe that features such as the diversity of the app signers and the use
of alternative markets highly correlate with the risk of malicious app
encounters. We also discover that some profiles, such as gamers and social-media
users, are exposed to more than twice the risk experienced by the average user.
In addition, we show that classification accuracy improves markedly when the
prediction models are trained with a per-profile approach.
Overall, our results confirm the inadequacy of one-size-fits-all protection
solutions.
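To make the last step of the analysis concrete, below is a minimal sketch (not the authors' code) of training a whole-population classifier versus one classifier per interest profile. The feature names (signer diversity, alternative-market installs, total installs), the profile labels, and the synthetic data are illustrative assumptions only; the paper's actual features, dataset, and models are not reproduced here.

```python
# Hypothetical sketch of the per-profile vs. whole-population comparison.
# All features, profiles, and data below are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 6000
# Assumed interest profiles; the paper derives profiles from user-installed apps.
profiles = rng.choice(["gamer", "investor", "social-media"], size=n)

# Hypothetical per-device features; signer diversity and alternative-market
# installs are the kinds of factors the abstract reports as correlated with risk.
X = np.column_stack([
    rng.poisson(12, n),   # number of distinct app signers on the device
    rng.poisson(2, n),    # apps installed from alternative markets
    rng.poisson(40, n),   # total installed apps
])

# Synthetic label: 1 if the device encountered malware/PUA, 0 if it stayed clean.
risk = 0.05 * X[:, 0] + 0.4 * X[:, 1] + 0.8 * (profiles == "gamer")
y = (rng.random(n) < 1.0 / (1.0 + np.exp(2.5 - risk))).astype(int)

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, y, profiles, test_size=0.3, random_state=0)

# One-size-fits-all model trained on the whole population.
global_clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("global accuracy:", round(accuracy_score(y_te, global_clf.predict(X_te)), 3))

# Per-profile models: one classifier trained and evaluated per interest profile.
for prof in np.unique(profiles):
    clf = RandomForestClassifier(random_state=0).fit(
        X_tr[p_tr == prof], y_tr[p_tr == prof])
    acc = accuracy_score(y_te[p_te == prof], clf.predict(X_te[p_te == prof]))
    print(f"per-profile accuracy ({prof}):", round(acc, 3))
```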
Related papers
- Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
- ECORS: An Ensembled Clustering Approach to Eradicate The Local And Global Outlier In Collaborative Filtering Recommender System [0.0]
Outlier detection is a key research area in recommender systems.
We propose an approach that addresses these challenges by employing various clustering algorithms.
Our experimental results demonstrate that this approach significantly improves the accuracy of outlier detection in recommender systems.
arXiv Detail & Related papers (2024-10-01T05:06:07Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- SeGA: Preference-Aware Self-Contrastive Learning with Prompts for Anomalous User Detection on Twitter [14.483830120541894]
We propose SeGA, preference-aware self-contrastive learning for anomalous user detection.
SeGA uses large language models to summarize user preferences via posts.
We empirically validate the effectiveness of the model design and pre-training strategies.
arXiv Detail & Related papers (2023-12-17T05:35:28Z)
- User Inference Attacks on Large Language Models [26.616016510555088]
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.
We study the privacy implications of fine-tuning LLMs on user data.
arXiv Detail & Related papers (2023-10-13T17:24:52Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- A Study on Accuracy, Miscalibration, and Popularity Bias in Recommendations [6.694971161661218]
We study how different genres affect the inconsistency of recommendation performance.
We find that users with little interest in popular content receive the worst recommendation accuracy.
Our experiments show that particular genres contribute to a different extent to the inconsistency of recommendation performance.
arXiv Detail & Related papers (2023-03-01T10:39:58Z)
- Towards a Fair Comparison and Realistic Design and Evaluation Framework of Android Malware Detectors [63.75363908696257]
We analyze 10 influential research works on Android malware detection using a common evaluation framework.
We identify five factors that, if not taken into account when creating datasets and designing detectors, significantly affect the trained ML models.
We conclude that the studied ML-based detectors have been evaluated optimistically, which justifies the good published results.
arXiv Detail & Related papers (2022-05-25T08:28:08Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Causal Disentanglement with Network Information for Debiased Recommendations [34.698181166037564]
Recent research proposes to debias by modeling a recommender system from a causal perspective.
The critical challenge in this setting is accounting for the hidden confounders.
We propose to leverage network information (i.e., user-social and user-item networks) to better approximate hidden confounders.
arXiv Detail & Related papers (2022-04-14T20:55:11Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.