Learning Parameters for Balanced Index Influence Maximization
- URL: http://arxiv.org/abs/2012.08067v1
- Date: Tue, 15 Dec 2020 03:30:54 GMT
- Title: Learning Parameters for Balanced Index Influence Maximization
- Authors: Manqing Ma, Gyorgy Korniss, Boleslaw K. Szymanski
- Abstract summary: We focus on a Balance Index (BI) algorithm that relies on three parameters to tune its performance to the given network structure.
We create small snapshots from the given synthetic and large-scale real-world networks.
We train our machine-learning model on the snapshots and apply this model to the real-world network to find the best BI parameters.
- Score: 0.45119235878273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Influence maximization is the task of finding the smallest set of nodes whose
activation in a social network can trigger an activation cascade that reaches
the targeted network coverage, where threshold rules determine the outcome of
influence. This problem is NP-hard and it has generated a significant amount of
recent research on finding efficient heuristics. We focus on a Balance Index
(BI) algorithm that relies on three parameters to tune its performance to the
given network structure. We propose using a supervised machine-learning
approach for such tuning. We select the most influential graph features for the
parameter tuning. Then, using random-walk-based graph-sampling, we create small
snapshots from the given synthetic and large-scale real-world networks. Using
exhaustive search, we find for these snapshots high-accuracy values of the BI
parameters to use as ground truth. Then, we train our machine-learning model
on the snapshots and apply this model to the real-world network to find the best
BI parameters. We apply these parameters to the sampled real-world network to
measure the quality of the sets of initiators found this way. We use various
real-world networks to validate our approach against other heuristics.
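A minimal sketch of this pipeline (not the authors' code) might look as follows; balance_index_spread is a hypothetical stand-in for the BI heuristic with its three parameters, and the graph features are illustrative:

```python
import itertools
import random

import networkx as nx
from sklearn.ensemble import RandomForestRegressor

def random_walk_snapshot(G, size, seed=0):
    """Sample a small snapshot subgraph via a simple random walk."""
    size = min(size, G.number_of_nodes())
    rng = random.Random(seed)
    node = rng.choice(list(G.nodes))
    visited = {node}
    while len(visited) < size:
        nbrs = list(G.neighbors(node))
        # Restart from a random node if the walk gets stuck.
        node = rng.choice(nbrs) if nbrs else rng.choice(list(G.nodes))
        visited.add(node)
    return G.subgraph(visited).copy()

def graph_features(G):
    """Illustrative graph features for predicting good BI parameters."""
    degs = [d for _, d in G.degree()]
    return [sum(degs) / len(degs), nx.density(G), nx.average_clustering(G)]

def best_bi_params(snapshot, spread_fn, grid):
    """Exhaustive grid search: the ground-truth label for one snapshot."""
    return max(grid, key=lambda params: spread_fn(snapshot, params))

# Three BI parameters, each swept over a coarse grid.
GRID = list(itertools.product([0.0, 0.5, 1.0], repeat=3))

# Hypothetical usage: balance_index_spread(G, params) would run the BI
# heuristic with the given parameters and return the cascade coverage.
# snapshots = [random_walk_snapshot(G, 200, seed=i) for i in range(100)]
# X = [graph_features(s) for s in snapshots]
# y = [best_bi_params(s, balance_index_spread, GRID) for s in snapshots]
# model = RandomForestRegressor().fit(X, y)  # predicts params for new graphs
```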
Related papers
- Neighborhood-Order Learning Graph Attention Network for Fake News Detection [2.34863357088666]
We propose a novel model called Neighborhood-Order Learning Graph Attention Network (NOL-GAT) for fake news detection.
This model allows each node in each layer to independently learn its optimal neighborhood order.
To evaluate the model's performance, experiments are conducted on various fake news datasets.
arXiv Detail & Related papers (2025-02-10T18:51:57Z) - Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings can be easily changed by better training networks in benchmarks.
arXiv Detail & Related papers (2024-02-27T11:52:49Z) - Low-Rank Representations Meets Deep Unfolding: A Generalized and
Interpretable Network for Hyperspectral Anomaly Detection [41.50904949744355]
Current hyperspectral anomaly detection (HAD) benchmark datasets suffer from low resolution, simple background, and small size of the detection data.
These factors also limit the performance of the well-known low-rank representation (LRR) models in terms of robustness.
We build a new set of HAD benchmark datasets, AIR-HAD for short, to improve the robustness of HAD algorithms in complex scenarios.
arXiv Detail & Related papers (2024-02-23T14:15:58Z) - Neural Network Pruning by Gradient Descent [7.427858344638741]
We introduce a novel and straightforward neural network pruning framework that incorporates the Gumbel-Softmax technique.
We demonstrate its exceptional compression capability, maintaining high accuracy on the MNIST dataset with only 0.15% of the original network parameters.
We believe our method opens a promising new avenue for deep learning pruning and the creation of interpretable machine learning systems.
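The summary does not spell out the paper's exact formulation; the sketch below only illustrates the Gumbel-Softmax trick such a pruning framework builds on, namely a relaxed keep/drop gate per weight (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Differentiable sample from the categorical given by `logits`."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1)
    z = np.exp((logits + gumbel) / tau)
    return z / z.sum(axis=-1, keepdims=True)

# One two-way gate per weight: column 0 = "keep", column 1 = "drop".
weights = rng.normal(size=5)
gate_logits = rng.normal(size=(5, 2))
keep_prob = gumbel_softmax(gate_logits)[:, 0]  # soft mask in (0, 1)
pruned = weights * keep_prob  # gradients can flow through the mask
print(pruned)
```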
arXiv Detail & Related papers (2023-11-21T11:12:03Z) - Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm to learn online the optimal source placement in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
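Grab-UCB's graph-kernel machinery is not reproduced here; for context, this is the plain UCB1 loop that such a sequential decision strategy elaborates on, with a placeholder arm set and reward function:

```python
import math
import random

def ucb1(arms, reward_fn, horizon=1000):
    """Plain UCB1: play each arm once, then maximize mean + exploration bonus."""
    counts = {a: 0 for a in arms}
    totals = {a: 0.0 for a in arms}
    for t in range(1, horizon + 1):
        if t <= len(arms):
            arm = arms[t - 1]  # initialization round: try every arm
        else:
            arm = max(arms, key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        totals[arm] += reward_fn(arm)
        counts[arm] += 1
    return max(arms, key=lambda a: totals[a] / counts[a])

# e.g. arms = candidate source nodes; reward_fn measures the payoff
# observed after placing the source at that node (stochastic here).
best = ucb1(arms=list(range(10)), reward_fn=lambda a: random.gauss(a / 10, 1))
```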
arXiv Detail & Related papers (2023-07-07T15:03:42Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
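As a toy illustration of the idea (the paper's generative model is far richer), one can treat flattened checkpoints as data points and sample new parameter vectors from a distribution fitted to the best-performing ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 50 flattened checkpoints of a 128-parameter model, with losses.
checkpoints = rng.normal(size=(50, 128))
losses = rng.uniform(size=50)

# "Condition" on low loss by fitting only the best decile of checkpoints.
best = checkpoints[losses <= np.quantile(losses, 0.1)]
mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6

new_params = rng.normal(mu, sigma)  # sample a fresh parameter vector
```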
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Improving Parametric Neural Networks for High-Energy Physics (and
Beyond) [0.0]
We aim at deepening the understanding of Parametric Neural Networks (pNNs) in light of real-world usage.
We propose an alternative parametrization scheme, resulting in a new parametrized neural network architecture: the AffinePNN.
We extensively evaluate our models on the HEPMASS dataset, along with its imbalanced version (called HEPMASS-IMB).
arXiv Detail & Related papers (2022-02-01T14:18:43Z) - Network Inference and Influence Maximization from Samples [20.916163957596577]
We study the task of selecting a small number of seed nodes in a social network to maximize the spread of the influence from these seeds.
We provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from cascade data.
Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn.
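For context on the learning-and-then-optimization approach, below is a minimal seed-selection sketch under the independent-cascade model; the edge probabilities p are assumed to come from the inference step, which is omitted:

```python
import random
import networkx as nx

rng = random.Random(0)

def ic_spread(G, seeds, trials=200):
    """Monte-Carlo estimate of expected cascade size (IC model).

    G is a directed graph whose edges carry an inferred probability "p".
    """
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in G.successors(u):
                if v not in active and rng.random() < G[u][v]["p"]:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seeds(G, k):
    """Classic greedy hill-climbing with a (1 - 1/e) guarantee."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in G if v not in seeds),
                   key=lambda v: ic_spread(G, seeds | {v}))
        seeds.add(best)
    return seeds
```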
arXiv Detail & Related papers (2021-06-07T08:06:36Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
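Policy-GNN's learned meta-policy is not reproduced here; the sketch below only shows the mechanism it controls, aggregating each node's features over a per-node neighborhood order k, with choose_order as a placeholder for the learned policy:

```python
import networkx as nx
import numpy as np

def aggregate_k_hops(G, feats, node, k):
    """Mean of features over the <= k-hop neighborhood of `node`."""
    hood = nx.single_source_shortest_path_length(G, node, cutoff=k)
    return np.mean([feats[v] for v in hood], axis=0)

def choose_order(G, node):
    """Placeholder policy; Policy-GNN learns this choice with RL."""
    return 1 if G.degree(node) > 4 else 2

G = nx.karate_club_graph()
feats = {v: np.random.default_rng(v).normal(size=8) for v in G}
embeddings = {v: aggregate_k_hops(G, feats, v, choose_order(G, v)) for v in G}
```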
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.