A Multi-Transformation Evolutionary Framework for Influence Maximization
in Social Networks
- URL: http://arxiv.org/abs/2204.03297v1
- Date: Thu, 7 Apr 2022 08:53:42 GMT
- Title: A Multi-Transformation Evolutionary Framework for Influence Maximization
in Social Networks
- Authors: Chao Wang, Jiaxuan Zhao, Lingling Li, Licheng Jiao, Jing Liu, Kai Wu
- Abstract summary: We propose a multi-transformation evolutionary framework for influence maximization (MTEFIM) to exploit potential similarities and unique advantages of alternate transformations.
MTEFIM can efficiently utilize the potentially transferable knowledge across multiple transformations to achieve highly competitive performance.
MTEFIM is validated on four real-world social networks.
- Score: 44.739573338273175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Influence maximization is a key issue for mining the deep information of
social networks, which aims to select a seed set from the network to maximize
the number of influenced nodes. To evaluate the influence spread of a seed set
efficiently, existing works have proposed some proxy models (transformations)
with lower computational costs to replace the expensive Monte Carlo simulation
process. These alternate transformations, built on network prior knowledge,
induce different search behaviors that share similar characteristics from
various perspectives. For a specific case, it is difficult for users to
determine a
suitable transformation a priori. With this in mind, we propose a
multi-transformation evolutionary framework for influence maximization (MTEFIM)
to exploit the potential similarities and unique advantages of alternate
transformations, sparing users from manually choosing the most suitable one. In
MTEFIM, multiple transformations are optimized simultaneously as multiple
tasks. Each transformation is assigned an evolutionary solver. MTEFIM consists
of three major components: 1) estimating the potential relationship
across transformations based on the degree of overlap across individuals (seed
sets) of different populations, 2) transferring individuals across populations
adaptively according to the inter-transformation relationship, 3) selecting the
final output seed set containing all the proxy model knowledge. The
effectiveness of MTEFIM is validated on four real-world social networks.
Experimental results show that MTEFIM can efficiently utilize the potentially
transferable knowledge across multiple transformations to achieve highly
competitive performance compared to several popular IM-specific methods. The
implementation of MTEFIM can be accessed at
https://github.com/xiaofangxd/MTEFIM.
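The three components above can be sketched in miniature. The following Python sketch is illustrative only: it assumes seed sets are represented as frozensets of node IDs, and the function names (`overlap_degree`, `adaptive_transfer`) and the Jaccard-based transfer rule are our assumptions, not the authors' released implementation (see the repository linked above for the actual code).

```python
import random

def overlap_degree(pop_a, pop_b):
    """Estimate the inter-transformation relationship as the average
    best Jaccard overlap between seed sets of two populations.
    Assumes each individual is a frozenset of node IDs."""
    total = 0.0
    for sa in pop_a:
        best = max(len(sa & sb) / len(sa | sb) for sb in pop_b)
        total += best
    return total / len(pop_a)

def adaptive_transfer(pop_src, pop_dst, rate):
    """Migrate a fraction of source individuals into the destination
    population, scaled by `rate` (e.g. the estimated overlap degree).
    Assumes pop_dst is already sorted best-first, so the last k
    individuals are the worst and can be replaced."""
    k = max(1, int(rate * len(pop_dst)))
    migrants = random.sample(pop_src, k)
    return pop_dst[:-k] + migrants

# Illustrative use: two populations evolved under different proxy models.
pop_a = [frozenset({1, 2, 3}), frozenset({2, 3, 4})]
pop_b = [frozenset({1, 2, 3}), frozenset({7, 8, 9})]
rate = overlap_degree(pop_a, pop_b)
pop_b = adaptive_transfer(pop_a, pop_b, rate)
```

The third component, selecting the final output seed set across all proxy models, would then evaluate the best individual of each population under a common criterion and return the winner; the abstract does not specify that criterion, so it is omitted here.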
Related papers
- Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation [59.41178047749177]
We focus on multi-domain Neural Machine Translation, with the goal of developing efficient models which can handle data from various domains seen during training and are robust to domains unseen during training.
We hypothesize that Sparse Mixture-of-Experts (SMoE) models are a good fit for this task, as they enable efficient model scaling.
We conduct a series of experiments aimed at validating the utility of SMoE for the multi-domain scenario, and find that a straightforward width scaling of Transformer is a simpler and surprisingly more efficient approach in practice, and reaches the same performance level as SMoE.
arXiv Detail & Related papers (2024-07-01T09:45:22Z) - Multi-Domain Evolutionary Optimization of Network Structures [25.658524436665637]
We develop a novel framework for multi-domain evolutionary optimization (MDEO).
Experiments on eight real-world networks of different domains demonstrate MDEO superiority in efficacy compared to classical evolutionary optimization.
Simulations of attacks on the community validate the effectiveness of the proposed MDEO in safeguarding community security.
arXiv Detail & Related papers (2024-06-21T04:53:39Z) - Learning to Transform Dynamically for Better Adversarial Transferability [32.267484632957576]
Adversarial examples, crafted by adding perturbations imperceptible to humans, can deceive neural networks.
We introduce a novel approach named Learning to Transform (L2T).
L2T increases the diversity of transformed images by selecting the optimal combination of operations from a pool of candidates.
arXiv Detail & Related papers (2024-05-23T00:46:53Z) - Unleashing Network Potentials for Semantic Scene Completion [50.95486458217653]
This paper proposes a novel SSC framework - Adversarial Modality Modulation Network (AMMNet).
AMMNet introduces two core modules: a cross-modal modulation enabling the interdependence of gradient flows between modalities, and a customized adversarial training scheme leveraging dynamic gradient competition.
Extensive experimental results demonstrate that AMMNet outperforms state-of-the-art SSC methods by a large margin.
arXiv Detail & Related papers (2024-03-12T11:48:49Z) - MIM-Reasoner: Learning with Theoretical Guarantees for Multiplex
Influence Maximization [22.899884160183596]
Multiplex influence maximization (MIM) asks us to identify a set of seed users so as to maximize the expected number of influenced users in a multiplex network.
We introduce MIM-Reasoner, which captures the complex propagation process within and between layers of a given multiplex network.
arXiv Detail & Related papers (2024-02-24T03:48:22Z) - Feature Decoupling-Recycling Network for Fast Interactive Segmentation [79.22497777645806]
Recent interactive segmentation methods iteratively take source image, user guidance and previously predicted mask as the input.
We propose the Feature Decoupling-Recycling Network (FDRN), which decouples the modeling components based on their intrinsic discrepancies.
arXiv Detail & Related papers (2023-08-07T12:26:34Z) - Deep Graph Representation Learning and Optimization for Influence
Maximization [10.90744025490539]
Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
We propose a novel framework DeepIM to generatively characterize the latent representation of seed sets.
We also design a novel objective function to infer optimal seed sets under flexible node-centrality-based budget constraints.
arXiv Detail & Related papers (2023-05-01T15:45:01Z) - Improving Diversity with Adversarially Learned Transformations for
Domain Generalization [81.26960899663601]
We present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z) - The Self-Optimal-Transport Feature Transform [2.804721532913997]
We show how to upgrade the set of features of a data instance to facilitate downstream matching or grouping related tasks.
A particular min-cost-max-flow fractional matching problem, whose entropy regularized version can be approximated by an optimal transport (OT) optimization, results in our transductive transform.
Empirically, the transform is highly effective and flexible in its use, consistently improving networks it is inserted into.
arXiv Detail & Related papers (2022-04-06T20:00:39Z) - Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.