Graph Bayesian Optimization for Multiplex Influence Maximization
- URL: http://arxiv.org/abs/2403.18866v1
- Date: Mon, 25 Mar 2024 14:50:01 GMT
- Title: Graph Bayesian Optimization for Multiplex Influence Maximization
- Authors: Zirui Yuan, Minglai Shao, Zhiqian Chen
- Abstract summary: Influence maximization (IM) is the problem of identifying a limited number of initial influential users within a social network to maximize the number of influenced users.
Previous research has mostly focused on individual information propagation, neglecting the simultaneous and interactive dissemination of multiple information items.
This paper first formulates the Multiplex Influence Maximization (Multi-IM) problem using multiplex diffusion models with an information association mechanism.
- Score: 9.155955744238852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Influence maximization (IM) is the problem of identifying a limited number of initial influential users within a social network to maximize the number of influenced users. However, previous research has mostly focused on individual information propagation, neglecting the simultaneous and interactive dissemination of multiple information items. In reality, when users encounter a piece of information, such as a smartphone product, they often associate it with related products in their minds, such as earphones or computers from the same brand. Additionally, information platforms frequently recommend related content to users, amplifying this cascading effect and leading to multiplex influence diffusion. This paper first formulates the Multiplex Influence Maximization (Multi-IM) problem using multiplex diffusion models with an information association mechanism. In this problem, the seed set is a combination of influential users and information. To effectively manage the combinatorial complexity, we propose Graph Bayesian Optimization for Multi-IM (GBIM). The multiplex diffusion process is thoroughly investigated using a highly effective global kernelized attention message-passing module. This module, in conjunction with Bayesian linear regression (BLR), produces a scalable surrogate model. A data acquisition module incorporating the exploration-exploitation trade-off is developed to optimize the seed set further. Extensive experiments on synthetic and real-world datasets have proven our proposed framework effective. The code is available at https://github.com/zirui-yuan/GBIM.
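The abstract combines two ideas: a Bayesian linear regression (BLR) surrogate over seed-set representations, and an acquisition step that balances exploration and exploitation. A minimal sketch of that surrogate-plus-acquisition pattern is below, using a plain feature vector and an upper-confidence-bound (UCB) rule; the feature map, prior strength `alpha`, noise level, and `beta` are illustrative assumptions, not the paper's kernelized attention message-passing module:

```python
import numpy as np

class BLRSurrogate:
    """Bayesian linear regression over fixed-length seed-set features."""

    def __init__(self, dim, alpha=1.0, noise=0.1):
        self.A = alpha * np.eye(dim)  # posterior precision (starts at the prior)
        self.b = np.zeros(dim)        # accumulated X^T y / noise^2
        self.noise = noise

    def fit(self, X, y):
        # Rank-update the Gaussian posterior with observed (features, spread) pairs.
        X, y = np.asarray(X, float), np.asarray(y, float)
        self.A += X.T @ X / self.noise**2
        self.b += X.T @ y / self.noise**2
        self.mean = np.linalg.solve(self.A, self.b)
        self.cov = np.linalg.inv(self.A)

    def predict(self, X):
        # Posterior predictive mean and standard deviation per candidate.
        X = np.asarray(X, float)
        mu = X @ self.mean
        var = np.einsum("ij,jk,ik->i", X, self.cov, X) + self.noise**2
        return mu, np.sqrt(var)

def ucb(mu, sigma, beta=2.0):
    # Larger beta favors exploring uncertain seed sets over exploiting known ones.
    return mu + beta * sigma
```

In a Bayesian-optimization loop, candidates with the highest UCB score would be evaluated on the (expensive) diffusion simulator, and the surrogate refit on the new observations.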
Related papers
- Enhancing Cross-Domain Recommendations with Memory-Optimized LLM-Based User Agents [28.559223475725137]
Large Language Model (LLM)-based user agents have emerged as a powerful tool for improving recommender systems.
We introduce AgentCF++, a novel framework featuring a dual-layer memory architecture and a two-step fusion mechanism.
arXiv Detail & Related papers (2025-02-19T16:02:59Z) - REM: A Scalable Reinforced Multi-Expert Framework for Multiplex Influence Maximization [3.275046031354923]
In social online platforms, identifying influential seed users to maximize influence spread is a crucial task.
We propose the Reinforced Expert Maximization framework (REM) to address these issues.
REM surpasses state-of-the-art methods in terms of influence spread, scalability, and inference time in influence tasks.
arXiv Detail & Related papers (2025-01-01T09:13:09Z) - InterFormer: Towards Effective Heterogeneous Interaction Learning for Click-Through Rate Prediction [72.50606292994341]
We propose a novel module named InterFormer to learn heterogeneous information interaction in an interleaving style.
Our proposed InterFormer achieves state-of-the-art performance on three public datasets and a large-scale industrial dataset.
arXiv Detail & Related papers (2024-11-15T00:20:36Z) - LLM-assisted Explicit and Implicit Multi-interest Learning Framework for Sequential Recommendation [50.98046887582194]
We propose an explicit and implicit multi-interest learning framework to model user interests on two levels: behavior and semantics.
The proposed EIMF framework effectively and efficiently combines small models with LLM to improve the accuracy of multi-interest modeling.
arXiv Detail & Related papers (2024-11-14T13:00:23Z) - Triple Modality Fusion: Aligning Visual, Textual, and Graph Data with Large Language Models for Multi-Behavior Recommendations [13.878297630442674]
This paper introduces a novel framework for multi-behavior recommendations, leveraging the fusion of triple-modality.
Our proposed model called Triple Modality Fusion (TMF) utilizes the power of large language models (LLMs) to align and integrate these three modalities.
Extensive experiments demonstrate the effectiveness of our approach in improving recommendation accuracy.
arXiv Detail & Related papers (2024-10-16T04:44:15Z) - Many-Objective Evolutionary Influence Maximization: Balancing Spread, Budget, Fairness, and Time [3.195234044113248]
The Influence Maximization (IM) problem seeks to discover the set of nodes in a graph that maximizes the spread of information.
This problem is known to be NP-hard, and it is usually studied by maximizing the influence (spread) and, alternatively, optimizing a second objective.
In this work, we propose a first case study where several IM-specific objective functions, namely budget, fairness, communities, and time, are optimized alongside influence maximization and seed-set size minimization.
arXiv Detail & Related papers (2024-03-27T16:54:45Z) - Compressed Interaction Graph based Framework for Multi-behavior Recommendation [46.16750419508853]
It is challenging to explore multi-behavior data due to the unbalanced data distribution and sparse target behavior.
We propose CIGF, a Compressed Interaction Graph based Framework, to overcome the above limitations.
We propose a Multi-Expert with Separate Input (MESI) network on top of CIGCN for multi-task learning.
arXiv Detail & Related papers (2023-03-04T13:41:36Z) - Learning with MISELBO: The Mixture Cookbook [62.75516608080322]
We present the first ever mixture of variational approximations for a normalizing flow-based hierarchical variational autoencoder (VAE) with VampPrior and a PixelCNN decoder network.
We explain this cooperative behavior by drawing a novel connection between VI and adaptive importance sampling.
We obtain state-of-the-art results among VAE architectures in terms of negative log-likelihood on the MNIST and FashionMNIST datasets.
arXiv Detail & Related papers (2022-09-30T15:01:35Z) - Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for Multi-Behavior Recommendation [52.89816309759537]
Multi-types of behaviors (e.g., clicking, adding to cart, purchasing, etc.) widely exist in most real-world recommendation scenarios.
The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z) - Disentangled Graph Collaborative Filtering [100.26835145396782]
Disentangled Graph Collaborative Filtering (DGCF) is a new model for learning informative representations of users and items from interaction data.
By modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations.
DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE.
arXiv Detail & Related papers (2020-07-03T15:37:25Z) - Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
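Several papers above, like the main paper on this page, build on the classic IM formulation mentioned in the many-objective entry. As background, here is a toy sketch of the standard greedy baseline under the independent cascade (IC) model; the graph, propagation probability `p`, and simulation counts are illustrative, and this is not code from any of the listed papers:

```python
import random

def ic_spread(graph, seeds, p=0.1, runs=200, seed=0):
    """Average number of nodes activated from `seeds` under the IC model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # Each newly active node gets one chance to activate
                    # each inactive neighbor with probability p.
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def greedy_im(graph, k, p=0.1, runs=200):
    """Greedily add the node with the largest estimated marginal spread."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: ic_spread(graph, seeds + [n], p, runs))
        seeds.append(best)
    return seeds
```

The Monte Carlo estimate makes each marginal-gain query expensive, which is exactly the cost that surrogate-based approaches such as the GBIM framework above aim to reduce.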
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.