Improving Social Media Popularity Prediction with Multiple Post Dependencies
- URL: http://arxiv.org/abs/2307.15413v1
- Date: Fri, 28 Jul 2023 09:06:50 GMT
- Title: Improving Social Media Popularity Prediction with Multiple Post Dependencies
- Authors: Zhizhen Zhang, Xiaohui Xie, Mengyu Yang, Ye Tian, Yong Jiang, Yong Cui
- Abstract summary: We propose a novel prediction framework named Dependency-aware Sequence Network (DSN)
For intra-post dependency, DSN adopts a multimodal feature extractor with an efficient fine-tuning strategy to obtain task-specific representations from images and textual information of posts.
For inter-post dependency, DSN uses a hierarchical information propagation method to learn category representations that better describe the differences between posts.
- Score: 33.517898847695136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social Media Popularity Prediction has drawn a lot of attention because of
its profound impact on many different applications, such as recommendation
systems and multimedia advertising. Despite recent efforts to leverage the
content of social media posts to improve prediction accuracy, many existing
models fail to fully exploit the multiple dependencies between posts, which are
essential for comprehensively extracting content information from posts. To tackle
this problem, we propose a novel prediction framework named Dependency-aware
Sequence Network (DSN) that exploits both intra- and inter-post dependencies.
For intra-post dependency, DSN adopts a multimodal feature extractor with an
efficient fine-tuning strategy to obtain task-specific representations from
images and textual information of posts. For inter-post dependency, DSN uses a
hierarchical information propagation method to learn category representations
that better describe the differences between posts. DSN also exploits
recurrent networks with a series of gating layers for more flexible local
temporal processing and multi-head attention for long-term
dependencies. The experimental results on the Social Media Popularity Dataset
demonstrate the superiority of our method compared to existing state-of-the-art
models.
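The pairing of gating layers (for local temporal processing) with multi-head attention (for long-term dependencies) described above can be illustrated with a minimal sketch. This is not the authors' DSN implementation; the dimensions, weights, and identity head projections below are toy assumptions chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_recurrent_step(h_prev, x_t, W_z, U_z, W_h, U_h):
    # GRU-style update: the gate z decides how much of the previous hidden
    # state to keep, giving flexible local temporal processing.
    z = 1.0 / (1.0 + np.exp(-(x_t @ W_z + h_prev @ U_z)))  # update gate
    h_cand = np.tanh(x_t @ W_h + h_prev @ U_h)             # candidate state
    return (1.0 - z) * h_prev + z * h_cand

def multi_head_attention(H, n_heads):
    # Scaled dot-product self-attention over the whole post sequence,
    # split into heads, to capture long-term dependencies. Q/K/V use
    # identity projections here purely for brevity.
    T, d = H.shape
    d_head = d // n_heads
    out = np.empty_like(H)
    for h in range(n_heads):
        Q = K = V = H[:, h * d_head:(h + 1) * d_head]
        scores = Q @ K.T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
        out[:, h * d_head:(h + 1) * d_head] = weights @ V
    return out

# Toy sequence of 5 post embeddings with dimension 8.
T, d = 5, 8
X = rng.standard_normal((T, d))
W_z, U_z, W_h, U_h = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))

# Local pass: gated recurrence over the sequence.
H = np.zeros((T, d))
h = np.zeros(d)
for t in range(T):
    h = gated_recurrent_step(h, X[t], W_z, U_z, W_h, U_h)
    H[t] = h

# Global pass: multi-head attention over the recurrent states.
Z = multi_head_attention(H, n_heads=2)
print(Z.shape)  # (5, 8)
```

The two passes are complementary: the gated recurrence mixes each post with its immediate predecessors, while the attention pass lets every post attend to every other post in the sequence regardless of distance.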
Related papers
- Hierarchical Information Enhancement Network for Cascade Prediction in Social Networks [51.54002032659713]
We propose a novel Hierarchical Information Enhancement Network (HIENet) for cascade prediction.
Our approach integrates the fundamental cascade sequence, user social graphs, and sub-cascade graphs into a unified framework.
arXiv Detail & Related papers (2024-03-22T14:57:27Z)
- Adaptive Dependency Learning Graph Neural Networks [5.653058780958551]
We propose a hybrid approach combining neural networks and statistical structure learning models to self-learn dependencies.
We demonstrate significantly improved performance using our proposed approach on real-world benchmark datasets without a pre-defined dependency graph.
arXiv Detail & Related papers (2023-12-06T20:56:23Z)
- Multi-modal Representation Learning for Social Post Location Inference [7.911777986696313]
In this work, we propose a novel Multi-modal Representation Learning Framework (MRLF) capable of fusing different modalities of social posts for location inference.
To handle noisy user-generated textual content, we introduce a novel attention-based character-aware module.
The experimental results show that MRLF can make accurate location predictions and open a new door to understanding the multi-modal data of social posts for online inference tasks.
arXiv Detail & Related papers (2023-06-11T02:35:48Z)
- Multi-Content Interaction Network for Few-Shot Segmentation [37.80624074068096]
Few-shot segmentation on COCO is challenging due to limited support images and large intra-class appearance discrepancies.
We propose a Multi-Content Interaction Network (MCINet) to remedy this issue.
MCINet improves FSS by incorporating the low-level structural information from another query branch into the high-level semantic features.
arXiv Detail & Related papers (2023-03-11T04:21:59Z)
- Multi-Behavior Hypergraph-Enhanced Transformer for Sequential Recommendation [33.97708796846252]
We introduce a new Multi-Behavior Hypergraph-enhanced Transformer framework (MBHT) to capture both short-term and long-term cross-type behavior dependencies.
Specifically, a multi-scale Transformer is equipped with low-rank self-attention to jointly encode behavior-aware sequential patterns from fine-grained and coarse-grained levels.
arXiv Detail & Related papers (2022-07-12T15:07:21Z)
- HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression [53.90578309960526]
Large pre-trained language models (PLMs) have shown superior performance compared with traditional neural network methods.
We propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.
arXiv Detail & Related papers (2021-10-16T11:23:02Z)
- Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, collecting a new large-scale dataset is perceived as intractable due to the laborious labelling involved.
We resort to multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet).
It is mainly achieved by proposing the novel Variational Attention (VA) technique for explicitly modeling the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
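The graph learning module mentioned above, which extracts uni-directed relations among variables, can be sketched with an anti-symmetric scoring scheme. This is an illustrative assumption about how such a module can work, not the paper's exact code; the embedding matrices `M1`/`M2`, `alpha`, and `k` are toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_directed_graph(M1, M2, alpha=3.0, k=2):
    # Anti-symmetric score: the matrix M1 @ M2.T - M2 @ M1.T satisfies
    # s[j, i] == -s[i, j], so after ReLU at most one direction of any
    # edge survives -- the learned relations are uni-directed.
    A = np.maximum(0.0, np.tanh(alpha * (M1 @ M2.T - M2 @ M1.T)))
    # Sparsify: keep only the k strongest outgoing edges per node.
    for i in range(A.shape[0]):
        weakest = np.argsort(A[i])[:-k]  # all but the top-k entries
        A[i, weakest] = 0.0
    return A

n_vars, emb = 6, 4
M1 = rng.standard_normal((n_vars, emb))
M2 = rng.standard_normal((n_vars, emb))
A = learn_directed_graph(M1, M2)

# Uni-directed property: an edge and its reverse are never both positive.
assert not np.any((A > 0) & (A.T > 0))
print(A.shape)  # (6, 6)
```

The learned adjacency `A` would then feed a downstream graph convolution; the anti-symmetry makes the direction of influence between any pair of variables unambiguous.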
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.