Improving Information Cascade Modeling by Social Topology and Dual Role User Dependency
- URL: http://arxiv.org/abs/2204.08529v1
- Date: Thu, 7 Apr 2022 14:26:33 GMT
- Title: Improving Information Cascade Modeling by Social Topology and Dual Role User Dependency
- Authors: Baichuan Liu, Deqing Yang, Yueyi Wang, Yuchen Shi
- Abstract summary: We propose a non-sequential information cascade model named TAN-DRUD (Topology-aware Attention Networks with Dual Role User Dependency).
TAN-DRUD obtains satisfactory performance on information cascade modeling through capturing the dual role user dependencies of information sender and receiver.
Our experiments on three cascade datasets demonstrate that our model is not only superior to the state-of-the-art cascade models, but also capable of exploiting topology information and inferring diffusion trees.
- Score: 3.3497820777154614
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the last decade, information diffusion (also known as information cascade) on social networks has been extensively investigated due to its application value in many fields. In recent years, many sequential models, including those based on recurrent neural networks, have been broadly employed to predict information cascades. However, the user dependencies in a cascade sequence captured by sequential models are generally unidirectional and inconsistent with diffusion trees. For example, the true trigger of a successor may be a non-immediate predecessor rather than the immediate predecessor in the sequence. To capture user dependencies more fully, which is crucial to precise cascade modeling, we propose a non-sequential information cascade model named TAN-DRUD (Topology-aware Attention Networks with Dual Role User Dependency). TAN-DRUD obtains satisfactory performance on information cascade modeling by capturing the dual-role user dependencies of information sender and receiver, which is inspired by classic communication theory. Furthermore, TAN-DRUD incorporates social topology into two-level attention networks for enhanced information diffusion prediction. Our extensive experiments on three cascade datasets demonstrate that our model is not only superior to state-of-the-art cascade models, but also capable of exploiting topology information and inferring diffusion trees.
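The core idea of the abstract — that each user plays two roles (sender and receiver), and that attention over all predecessors rather than just the immediate one lets the model recover diffusion-tree edges — can be illustrated with a minimal sketch. This is a simplified, hypothetical rendering, not the paper's actual architecture: the embedding setup, the `adjacency` mask used as a stand-in for social topology, and the single attention layer (the paper uses two-level attention networks) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, d = 6, 8

# Dual-role embeddings: every user has a separate sender vector and
# receiver vector (a hypothetical parameterization).
sender_emb = rng.normal(size=(n_users, d))
receiver_emb = rng.normal(size=(n_users, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_next_user(cascade, candidate, adjacency=None):
    """Score a candidate as the next activated user in the cascade.

    Attention lets a non-immediate predecessor dominate: the candidate's
    receiver vector attends over the sender vectors of ALL prior users,
    not only the last one. An optional social-topology mask restricts
    attention to users the candidate is connected to.
    """
    senders = sender_emb[cascade]           # (k, d) sender vectors
    query = receiver_emb[candidate]         # (d,)  receiver vector
    logits = senders @ query / np.sqrt(d)   # scaled dot-product attention
    if adjacency is not None:
        mask = np.array([adjacency[u, candidate] for u in cascade])
        logits = np.where(mask > 0, logits, -1e9)
    attn = softmax(logits)
    context = attn @ senders                # weighted sender context
    return context @ query, attn

cascade = [0, 2, 4]                         # users already activated, in order
score, attn = score_next_user(cascade, candidate=5)
# The argmax of attn over the cascade approximates the predecessor that
# triggered user 5, i.e. one inferred edge of the diffusion tree.
```

The attention weights are what make diffusion-tree inference possible here: unlike an RNN, which compresses the prefix into one hidden state, each predecessor keeps its own weight, so the model can point at any earlier user as the true trigger.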
Related papers
- Hierarchical Information Enhancement Network for Cascade Prediction in Social Networks [51.54002032659713]
We propose a novel Hierarchical Information Enhancement Network (HIENet) for cascade prediction.
Our approach integrates fundamental cascade sequence, user social graphs, and sub-cascade graph into a unified framework.
arXiv Detail & Related papers (2024-03-22T14:57:27Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- HierCas: Hierarchical Temporal Graph Attention Networks for Popularity Prediction in Information Cascades [25.564185461383655]
Information cascade popularity prediction is critical for many applications, including but not limited to identifying fake news and accurate recommendations.
Traditional feature-based methods rely on handcrafted features, which are domain-specific and lack generalizability to new domains.
We propose Hierarchical Temporal Graph Attention Networks for cascade popularity prediction (HierCas), which operates on the entire cascade graph by a dynamic graph modeling approach.
arXiv Detail & Related papers (2023-10-20T01:55:10Z)
- Preference Enhanced Social Influence Modeling for Network-Aware Cascade Prediction [59.221668173521884]
We propose a novel framework to promote cascade size prediction by enhancing the user preference modeling.
Our end-to-end method makes the user activating process of information diffusion more adaptive and accurate.
arXiv Detail & Related papers (2022-04-18T09:25:06Z)
- Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment [32.01355605506855]
Quantization-aware training can produce more stable models than standard, adversarial, and Mixup training.
Disagreements often have closer top-1 and top-2 output probabilities, and $Margin$ is a better indicator than the other uncertainty metrics to distinguish disagreements.
We open-source our code and models as a new benchmark for further studying quantized models.
arXiv Detail & Related papers (2022-04-08T11:19:16Z)
- How Well Do Sparse Imagenet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z)
- Multi-Scale Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition [140.18376685167857]
A simple yet effective multi-scale semantics-guided neural network is proposed for skeleton-based action recognition.
MS-SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets.
arXiv Detail & Related papers (2021-11-07T03:50:50Z)
- Streaming Graph Neural Networks via Continual Learning [31.810308087441445]
Graph neural networks (GNNs) have achieved strong performance in various applications.
In this paper, we propose a streaming GNN model based on continual learning.
We show that our model can efficiently update model parameters and achieve comparable performance to model retraining.
arXiv Detail & Related papers (2020-09-23T06:52:30Z)
- Deep Collaborative Embedding for information cascade prediction [58.90540495232209]
We propose a novel model called Deep Collaborative Embedding (DCE) for information cascade prediction.
We propose an auto-encoder based collaborative embedding framework to learn the node embeddings with cascade collaboration and node collaboration.
The results of extensive experiments conducted on real-world datasets verify the effectiveness of our approach.
arXiv Detail & Related papers (2020-01-18T13:32:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.