Detect Professional Malicious User with Metric Learning in Recommender
Systems
- URL: http://arxiv.org/abs/2205.09673v1
- Date: Thu, 19 May 2022 16:32:36 GMT
- Title: Detect Professional Malicious User with Metric Learning in Recommender
Systems
- Authors: Yuanbo Xu, Yongjian Yang, En Wang, Fuzhen Zhuang, Hui Xiong
- Abstract summary: In e-commerce, online retailers often suffer from professional malicious users (PMUs), who use negative reviews and low ratings to extort retailers for illicit profit.
We propose an unsupervised multi-modal learning model, MMD, which employs Metric learning for professional Malicious users Detection with both ratings and reviews.
- Score: 39.26521260453495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In e-commerce, online retailers often suffer from professional
malicious users (PMUs), who deliberately post negative reviews and low ratings
on products they have purchased in order to extort retailers for illicit
profit. PMU detection poses three challenges: 1) professional malicious users
do not perform any overtly abnormal or illegal interactions (they never leave
too many negative reviews and low ratings at the same time), and they adopt
masking strategies to disguise themselves, so conventional outlier detection
methods are confounded by these strategies; 2) a PMU detection model must take
both ratings and reviews into consideration, which makes PMU detection a
multi-modal problem; 3) no public dataset with labels for professional
malicious users exists, which makes PMU detection an unsupervised learning
problem. To this end, we propose an unsupervised multi-modal learning model:
MMD, which employs Metric learning for professional Malicious users Detection
with both ratings and reviews. MMD first uses a modified RNN to project each
informative review into a sentiment score, jointly considering the ratings and
reviews. Professional malicious user profiling (MUP) is then proposed to
capture the gap between sentiment scores and ratings; MUP filters users by
this gap and builds a candidate PMU set. We apply metric learning-based
clustering to learn a proper metric matrix for PMU detection, and finally use
this metric together with the labeled candidate users to detect PMUs. In
addition, we apply an attention mechanism in metric learning to improve the
model's performance. Extensive experiments on four datasets demonstrate that
our proposed method can solve this unsupervised detection problem. Moreover,
the performance of state-of-the-art recommender models is enhanced when MMD is
used as a preprocessing stage.
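The pipeline in the abstract (sentiment scores from reviews, a sentiment-rating gap to filter candidate PMUs, then a learned metric matrix) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the threshold, the function names, and the toy data are all assumptions, and a real sentiment score would come from the modified RNN rather than being given directly.

```python
# Hypothetical sketch of MUP-style filtering and a learned-metric distance.
# Thresholds, names, and data below are illustrative assumptions.
import numpy as np

def sentiment_gap(sentiment_scores, ratings, max_rating=5.0):
    """Absolute gap between review sentiment (in [0, 1]) and normalized rating."""
    return np.abs(sentiment_scores - ratings / max_rating)

def candidate_pmus(user_ids, sentiment_scores, ratings, threshold=0.5):
    """MUP-style filter: keep users whose sentiment gap exceeds a threshold."""
    gaps = sentiment_gap(sentiment_scores, ratings)
    return [u for u, g in zip(user_ids, gaps) if g > threshold]

def metric_distance_sq(x, y, M):
    """Squared distance under a learned metric matrix M (positive semi-definite),
    as used by metric learning-based clustering."""
    d = x - y
    return float(d @ M @ d)

# Toy usage: u3 writes a positive review (high sentiment) but rates 1 star,
# so the gap is large and u3 becomes a candidate PMU.
users = ["u1", "u2", "u3"]
sent = np.array([0.9, 0.2, 0.85])   # sentiment scores (would come from the RNN)
rate = np.array([4.5, 1.0, 1.0])    # raw star ratings
print(candidate_pmus(users, sent, rate))
```

With `M` set to the identity matrix, `metric_distance_sq` reduces to the squared Euclidean distance; the point of metric learning is to replace that identity with a matrix fitted so that PMUs cluster tightly under the learned distance.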
Related papers
- Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods [23.6050988823262]
Machine Unlearning (MU) aims to remove target training data from a trained model so that the removed data no longer influences the model's behavior. Yet, researchers in this rapidly emerging field face challenges in analyzing and understanding the behavior of different MU methods. We introduce a visual analytics system, Unlearning Comparator, designed to facilitate the systematic evaluation of MU methods.
arXiv Detail & Related papers (2025-08-18T08:53:53Z) - The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning [87.1610740406279]
The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons.
Current evaluations are private, preventing further research into mitigating risk.
We publicly release the Weapons of Mass Destruction Proxy benchmark, a dataset of 3,668 multiple-choice questions.
arXiv Detail & Related papers (2024-03-05T18:59:35Z) - SeGA: Preference-Aware Self-Contrastive Learning with Prompts for
Anomalous User Detection on Twitter [14.483830120541894]
We propose SeGA, preference-aware self-contrastive learning for anomalous user detection.
SeGA uses large language models to summarize user preferences via posts.
We empirically validate the effectiveness of the model design and pre-training strategies.
arXiv Detail & Related papers (2023-12-17T05:35:28Z) - EMShepherd: Detecting Adversarial Samples via Side-channel Leakage [6.868995628617191]
Adversarial attacks have disastrous consequences for deep learning-empowered critical applications.
We propose a framework, EMShepherd, to capture electromagnetic traces of model execution, perform processing on traces and exploit them for adversarial detection.
We demonstrate that our air-gapped EMShepherd can effectively detect different adversarial attacks on a commonly used FPGA deep learning accelerator.
arXiv Detail & Related papers (2023-03-27T19:38:55Z) - Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first definition of a general evaluation of this kind.
arXiv Detail & Related papers (2022-08-23T09:37:31Z) - Debiasing Learning for Membership Inference Attacks Against Recommender
Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z) - Prepare for Trouble and Make it Double. Supervised and Unsupervised
Stacking for AnomalyBased Intrusion Detection [4.56877715768796]
We propose the adoption of meta-learning, in the form of a two-layer Stacker, to create a mixed approach that detects both known and unknown threats.
It turns out to be more effective than supervised algorithms at detecting zero-day attacks, mitigating their main weakness while still maintaining adequate capability in detecting known attacks.
arXiv Detail & Related papers (2022-02-28T08:41:32Z) - Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users'
Feedback [62.997667081978825]
We present a novel approach for considering user feedback and evaluate it using three distinct strategies.
Despite limited feedback returned by users (as low as 20% of the total), our approach obtains results similar to those of state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-16T07:32:51Z) - Robust Spammer Detection by Nash Reinforcement Learning [64.80986064630025]
We develop a minimax game where the spammers and spam detectors compete with each other on their practical goals.
We show that an optimization algorithm can reliably find an equilibrium detector that robustly prevents spammers, under any mixed spamming strategy, from attaining their practical goal.
arXiv Detail & Related papers (2020-06-10T21:18:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.