SeGA: Preference-Aware Self-Contrastive Learning with Prompts for
Anomalous User Detection on Twitter
- URL: http://arxiv.org/abs/2312.11553v1
- Date: Sun, 17 Dec 2023 05:35:28 GMT
- Authors: Ying-Ying Chang, Wei-Yao Wang, Wen-Chih Peng
- Abstract summary: We propose SeGA, preference-aware self-contrastive learning for anomalous user detection.
SeGA uses large language models to summarize user preferences via posts.
We empirically validate the effectiveness of the model design and pre-training strategies.
- Score: 14.483830120541894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the dynamic and rapidly evolving world of social media, detecting
anomalous users has become a crucial task for addressing malicious activities such
as misinformation and cyberbullying. As anomalous users increasingly mimic
normal users to evade detection, existing methods that focus only on bot
detection fail to capture the subtle distinctions between users. To address
these challenges, we propose SeGA, preference-aware self-contrastive learning
for anomalous user detection,
which leverages heterogeneous entities and their relations in the Twittersphere
to detect anomalous users with different malicious strategies. SeGA utilizes
the knowledge of large language models to summarize user preferences via posts.
In addition, integrating user preferences with prompts as pseudo-labels for
preference-aware self-contrastive learning enables the model to learn
multifaceted aspects for describing the behaviors of users. Extensive
experiments on the proposed TwBNT benchmark demonstrate that SeGA significantly
outperforms the state-of-the-art methods (+3.5% to +27.6%) and empirically
validate the effectiveness of the model design and pre-training strategies. Our
code and data are publicly available at https://github.com/ying0409/SeGA.
Related papers
- Leave No One Behind: Online Self-Supervised Self-Distillation for Sequential Recommendation [20.52842524024608]
Sequential recommendation methods play a pivotal role in modern recommendation systems.
Recent methods leverage contrastive learning to derive self-supervision signals.
We introduce a novel learning paradigm, named Online Self-Supervised Self-distillation for Sequential Recommendation.
arXiv Detail & Related papers (2024-03-22T12:27:21Z)
- Causal Structure Representation Learning of Confounders in Latent Space for Recommendation [6.839357057621987]
Inferring user preferences from the historical feedback of users is a valuable problem in recommender systems.
We consider the influence of confounders, disentangle them from user preferences in the latent space, and employ causal graphs to model their interdependencies.
arXiv Detail & Related papers (2023-11-02T08:46:07Z)
- User Inference Attacks on Large Language Models [26.616016510555088]
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.
We study the privacy implications of fine-tuning LLMs on user data.
arXiv Detail & Related papers (2023-10-13T17:24:52Z)
- Online Corrupted User Detection and Regret Minimization [49.536254494829436]
In real-world online web systems, multiple users usually arrive sequentially into the system.
We present an important online learning problem named LOCUD to learn and utilize unknown user relations from disrupted behaviors.
We devise a novel online detection algorithm OCCUD based on RCLUB-WCU's inferred user relations.
arXiv Detail & Related papers (2023-10-07T10:20:26Z)
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Detecting and Quantifying Malicious Activity with Simulation-based Inference [61.9008166652035]
We show experiments in malicious user identification using a model of regular and malicious users interacting with a recommendation algorithm.
We provide a novel simulation-based measure for quantifying the effects of a user or group of users on its dynamics.
arXiv Detail & Related papers (2021-10-06T03:39:24Z)
- Hyper Meta-Path Contrastive Learning for Multi-Behavior Recommendation [61.114580368455236]
User purchasing prediction with multi-behavior information remains a challenging problem for current recommendation systems.
We propose the concept of hyper meta-path to construct hyper meta-paths or hyper meta-graphs to explicitly illustrate the dependencies among different behaviors of a user.
Thanks to the recent success of graph contrastive learning, we leverage it to learn embeddings of user behavior patterns adaptively instead of assigning a fixed scheme to understand the dependencies among different behaviors.
arXiv Detail & Related papers (2021-09-07T04:28:09Z)
- From Implicit to Explicit feedback: A deep neural network for modeling sequential behaviours and long-short term preferences of online users [3.464871689508835]
Implicit and explicit feedback play different roles in producing useful recommendations.
We start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests.
arXiv Detail & Related papers (2021-07-26T16:59:20Z)
- Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference [150.07326223077405]
Few-shot learning is attracting much attention to mitigate data scarcity.
We present a discriminative nearest neighbor classification with deep self-attention.
We propose to boost the discriminative ability by transferring a natural language inference (NLI) model.
arXiv Detail & Related papers (2020-10-25T00:39:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.