Detecting and Quantifying Malicious Activity with Simulation-based
Inference
- URL: http://arxiv.org/abs/2110.02483v2
- Date: Thu, 7 Oct 2021 10:56:01 GMT
- Title: Detecting and Quantifying Malicious Activity with Simulation-based
Inference
- Authors: Andrew Gambardella, Bogdan State, Naeemullah Khan, Leo Tsourides,
Philip H. S. Torr, Atılım Güneş Baydin
- Abstract summary: We show experiments in malicious user identification using a model of regular and malicious users interacting with a recommendation algorithm.
We provide a novel simulation-based measure for quantifying the effects of a user or group of users on the algorithm's dynamics.
- Score: 61.9008166652035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose the use of probabilistic programming techniques to tackle the
malicious user identification problem in a recommendation algorithm.
Probabilistic programming provides numerous advantages over other techniques,
including but not limited to providing a disentangled representation of how
malicious users acted under a structured model, as well as allowing for the
quantification of damage caused by malicious users. We present experiments on
malicious user identification using a model of regular and malicious users
interacting with a simple recommendation algorithm, and provide a novel
simulation-based measure for quantifying the effects of a user or group of
users on the algorithm's dynamics.
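The paper's implementation is not reproduced here, but the core idea can be sketched in a few dozen lines: simulate regular and malicious users interacting with a toy popularity-based recommender, infer each user's latent malicious flag by importance sampling over simulated traces, and quantify influence by comparing the recommender's dynamics with and without the malicious behaviour. Everything below (the recommender, the Bernoulli prior, the Gaussian pseudo-likelihood) is an illustrative assumption, not the paper's exact model.

```python
import numpy as np

N_USERS, N_ITEMS, N_STEPS = 20, 10, 50
TARGET = 0  # item the malicious users try to promote (illustrative)

def simulate(is_malicious, rng):
    """Simulate users interacting with a toy popularity-based recommender.

    Regular users click items they genuinely prefer; malicious users always
    click TARGET, dragging the recommender's popularity counts toward it.
    """
    prefs = rng.dirichlet(np.ones(N_ITEMS), size=N_USERS)  # latent tastes
    counts = np.ones(N_ITEMS)                              # recommender state
    clicks = np.zeros((N_USERS, N_ITEMS))
    for _ in range(N_STEPS):
        u = rng.integers(N_USERS)
        item = TARGET if is_malicious[u] else rng.choice(N_ITEMS, p=prefs[u])
        counts[item] += 1
        clicks[u, item] += 1
    return clicks, counts

# Ground truth world: users 0 and 1 are malicious.
true_flags = np.zeros(N_USERS, dtype=bool)
true_flags[:2] = True
obs_clicks, obs_counts = simulate(true_flags, np.random.default_rng(1))

# Simulation-based inference by importance sampling: draw malicious flags
# from a Bernoulli(0.1) prior, run the simulator, and weight each draw by how
# well its click pattern matches the observation (a Gaussian pseudo-likelihood
# stands in for the trace conditioning that a probabilistic programming
# system would perform).
rng = np.random.default_rng(0)
flags, log_ws = [], []
for s in range(2000):
    proposal = rng.random(N_USERS) < 0.1
    sim_clicks, _ = simulate(proposal, np.random.default_rng(s + 2))
    flags.append(proposal)
    log_ws.append(-0.5 * np.sum((sim_clicks - obs_clicks) ** 2) / 10.0)
ws = np.exp(np.array(log_ws) - max(log_ws))
posterior = (np.array(flags) * ws[:, None]).sum(0) / ws.sum()
print("suspected malicious users:", np.flatnonzero(posterior > 0.5))

# Simulation-based influence measure: re-simulate with the malicious
# behaviour switched off (in this toy, an all-regular world) and compare the
# recommender's item distributions via total variation distance.
_, cf_counts = simulate(np.zeros(N_USERS, dtype=bool), np.random.default_rng(1))
p, q = obs_counts / obs_counts.sum(), cf_counts / cf_counts.sum()
print("influence on dynamics (TV distance):", round(0.5 * np.abs(p - q).sum(), 3))
```

In the paper's setting a probabilistic programming system handles the conditioning and gives a disentangled posterior over each user's behaviour; the hand-rolled weighting above only gestures at that machinery.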
Related papers
- Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
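The abstract does not spell out the paper's specific measures; as a generic illustration of an information-theoretic lens on recommendations, the sketch below scores how concentrated (low-entropy) each user's recommended-item distribution is. All names and the interpretation are assumptions, not the paper's definitions.

```python
import numpy as np

def recommendation_entropy(item_probs: np.ndarray) -> float:
    """Shannon entropy (in bits) of one user's recommended-item distribution.

    Low entropy = recommendations concentrated on few items (a filter
    bubble); high entropy = diverse exposure. A generic measure, not
    necessarily one of the paper's.
    """
    p = item_probs[item_probs > 0]
    return float(-(p * np.log2(p)).sum())

# Example: a diverse user vs. one locked onto two items.
print(recommendation_entropy(np.full(8, 1 / 8)))           # 3.0 bits
print(recommendation_entropy(np.array([0.5, 0.5, 0, 0])))  # 1.0 bit
```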
- Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences [7.552217586057245]
We propose a simulation framework that mimics user-recommender system interactions in a long-term scenario.
We introduce two novel metrics for quantifying the algorithm's impact on user preferences, specifically in terms of drift over time.
arXiv Detail & Related papers (2024-09-24T21:54:22Z)
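The blurb does not define the paper's two drift metrics; a minimal stand-in is to track the distance between a user's preference vector at the start and after t simulated interaction rounds. The linear update rule below is an assumption for illustration only.

```python
import numpy as np

def simulate_drift(p0, recommender_bias, steps=100, lr=0.05):
    """Track how far a user's preference vector drifts from its start.

    Each round the user nudges toward what the recommender pushes (an
    assumed linear update, purely illustrative); we record the cosine
    distance from the initial preferences as a drift curve.
    """
    p, drift = p0.copy(), []
    for _ in range(steps):
        p = (1 - lr) * p + lr * recommender_bias  # pull toward pushed content
        cos = p @ p0 / (np.linalg.norm(p) * np.linalg.norm(p0))
        drift.append(1.0 - cos)
    return np.array(drift)

p0 = np.array([0.4, 0.3, 0.2, 0.1])
bias = np.array([0.05, 0.05, 0.05, 0.85])  # recommender over-promotes item 3
print("drift after 100 rounds:", simulate_drift(p0, bias)[-1].round(3))
```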
- SeGA: Preference-Aware Self-Contrastive Learning with Prompts for Anomalous User Detection on Twitter [14.483830120541894]
We propose SeGA, preference-aware self-contrastive learning for anomalous user detection.
SeGA uses large language models to summarize user preferences via posts.
We empirically validate the effectiveness of the model design and pre-training strategies.
arXiv Detail & Related papers (2023-12-17T05:35:28Z)
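SeGA's exact objective is not given in the blurb; below is a generic InfoNCE-style self-contrastive loss over user embeddings of the kind such detectors build on. The two "views" of a user (e.g. an embedding of an LLM summary of their posts vs. an embedding of their behaviour), the shapes, and the temperature are all assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss for one user.

    `anchor` and `positive` are two views of the same user; `negatives`
    are other users. Lower loss = the views agree, i.e. the user looks
    self-consistent.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

rng = np.random.default_rng(0)
u = rng.normal(size=16)
print(info_nce(u, u + 0.05 * rng.normal(size=16),
               [rng.normal(size=16) for _ in range(8)]))
```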
- R-U-SURE? Uncertainty-Aware Code Suggestions By Maximizing Utility Across Random User Intents [14.455036827804541]
Large language models show impressive results at predicting structured text such as code, but also commonly introduce errors and hallucinations in their output.
We propose Randomized Utility-driven Synthesis of Uncertain REgions (R-U-SURE), an approach for building uncertainty-aware suggestions based on a decision-theoretic model of goal-conditioned utility.
arXiv Detail & Related papers (2023-03-01T18:46:40Z)
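R-U-SURE's optimization machinery goes well beyond a blurb; the simplest version of the underlying idea is sketched below: sample several completions for one prompt, treat them as draws over possible user intents, and flag positions where the samples disagree as uncertain regions. This is a simplification for illustration, not the paper's algorithm.

```python
from collections import Counter

def uncertain_regions(samples, threshold=0.8):
    """Mark token positions where sampled completions disagree.

    `samples` is a list of token lists drawn from a model for one prompt,
    each standing in for a possible user intent. A position counts as
    'sure' when at least `threshold` of the samples agree on its token.
    """
    length = min(len(s) for s in samples)
    marks = []
    for i in range(length):
        token, count = Counter(s[i] for s in samples).most_common(1)[0]
        marks.append((token, count / len(samples) >= threshold))
    return marks

completions = [
    ["x", "=", "load", "(", "path", ")"],
    ["x", "=", "load", "(", "file", ")"],
    ["x", "=", "load", "(", "path", ")"],
]
for token, sure in uncertain_regions(completions):
    print(token, "OK" if sure else "<REVIEW>")
```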
- Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability [27.21058243752746]
We propose an evaluation procedure based on reachability to quantify the maximum probability of recommending a target piece of content to a user.
Reachability can be used to detect biases in the availability of content and to diagnose limitations in the opportunities for discovery granted to users.
We demonstrate evaluations of recommendation algorithms trained on large datasets of explicit and implicit ratings.
arXiv Detail & Related papers (2021-06-30T16:18:12Z)
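As a toy version of the reachability question (the maximum probability of a target item being recommended to a user), the sketch below grid-searches one rating the user is allowed to change in a softmax recommender. The paper formulates this as an optimization over recommender model classes, which this crude search does not reproduce.

```python
import numpy as np

def rec_probs(user_ratings, item_embeddings, beta=2.0):
    """Toy recommender: softmax over item scores given the user's ratings."""
    scores = item_embeddings @ user_ratings
    e = np.exp(beta * (scores - scores.max()))
    return e / e.sum()

def max_reachability(base_ratings, item_embeddings, target, editable, grid):
    """Max probability of recommending `target` over one editable rating.

    The user may only change rating index `editable`; grid-search it and
    report the best achievable target probability (a crude stand-in for
    the paper's stochastic-reachability optimization).
    """
    best = 0.0
    for v in grid:
        r = base_ratings.copy()
        r[editable] = v
        best = max(best, rec_probs(r, item_embeddings)[target])
    return best

rng = np.random.default_rng(0)
items = rng.normal(size=(6, 4))  # 6 items, 4-dim rating space
ratings = rng.normal(size=4)
baseline = rec_probs(ratings, items)[3]
reach = max_reachability(ratings, items, target=3, editable=0,
                         grid=np.linspace(-3, 3, 61))
print(f"baseline P(target)={baseline:.3f}, max reachable={reach:.3f}")
```

A large gap between the baseline and the reachable probability indicates the item is available to the user in principle but rarely surfaced.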
- Automated Decision-based Adversarial Attacks [48.01183253407982]
We consider the practical and challenging decision-based black-box adversarial setting.
Under this setting, the attacker can only acquire the final classification labels by querying the target model.
We propose to automatically discover decision-based adversarial attack algorithms.
arXiv Detail & Related papers (2021-05-09T13:15:10Z)
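The paper searches over attack programs automatically; the kind of hand-written baseline it builds on can be sketched as a label-only random walk in the style of the Boundary Attack: start from a misclassified point and take random steps toward the original input, keeping only steps that stay adversarial. The classifier and step rule below are illustrative, not the paper's discovered attacks.

```python
import numpy as np

def boundary_attack(classify, x_orig, x_adv_start, true_label,
                    steps=1000, step=0.1, rng=None):
    """Decision-based (label-only) attack sketch.

    `classify` returns hard labels only. Starting from any misclassified
    point, accept a random step only if it stays adversarial and moves
    closer to the original input.
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x_adv_start.copy()
    for _ in range(steps):
        direction = x_orig - x_adv
        candidate = x_adv + step * direction \
            + 0.5 * step * rng.normal(size=x_adv.shape)
        if (classify(candidate) != true_label and
                np.linalg.norm(candidate - x_orig)
                < np.linalg.norm(x_adv - x_orig)):
            x_adv = candidate
    return x_adv

# Toy hard-label classifier: label 1 iff the feature sum is positive.
classify = lambda z: int(z.sum() > 0)
x = np.array([1.0, 2.0, 3.0])  # classified as 1
adv = boundary_attack(classify, x, np.array([-5.0, -5.0, -5.0]), true_label=1)
print(adv, classify(adv), np.linalg.norm(adv - x))
```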
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
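HMCAM's accumulated-momentum variant is not specified in the blurb; below is plain Hamiltonian Monte Carlo with a leapfrog integrator, the base sampler such methods extend, targeting a toy 2-D Gaussian rather than an adversarial distribution.

```python
import numpy as np

def hmc_sample(log_p, grad_log_p, x0, n_samples=500, eps=0.1,
               n_leapfrog=20, rng=None):
    """Vanilla Hamiltonian Monte Carlo (the base of methods like HMCAM).

    Proposes by simulating Hamiltonian dynamics with a leapfrog
    integrator, then Metropolis-accepts to correct discretization error.
    """
    rng = rng or np.random.default_rng(0)
    x, samples = x0.copy(), []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)                # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_p(x_new)      # first half step
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_p(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_p(x_new)      # final half step
        # Metropolis correction on the joint (position, momentum) energy.
        log_accept = (log_p(x_new) - 0.5 * p_new @ p_new
                      - log_p(x) + 0.5 * p @ p)
        if np.log(rng.random()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

log_p = lambda x: -0.5 * x @ x   # standard 2-D Gaussian target
grad_log_p = lambda x: -x
samples = hmc_sample(log_p, grad_log_p, np.zeros(2))
print(samples.mean(0), samples.std(0))  # roughly [0, 0] and [1, 1]
```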
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.