Top influencers can be identified universally by combining classical centralities
- URL: http://arxiv.org/abs/2006.07657v2
- Date: Tue, 4 Aug 2020 12:33:44 GMT
- Title: Top influencers can be identified universally by combining classical centralities
- Authors: Doina Bucur
- Abstract summary: No single centrality has consistently good ranking power.
Certain pairs of centralities cooperate particularly well in statistically drawing the boundary between the top spreaders and the rest.
The nodes selected as superspreaders usually jointly maximise the values of both centralities.
- Score: 0.6853165736531939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information flow, opinion, and epidemics spread over structured networks.
When using individual node centrality indicators to predict which nodes will be
among the top influencers or spreaders in a large network, no single centrality
has consistently good ranking power. We show that statistical classifiers using
two or more centralities as input are instead consistently predictive over many
diverse, static real-world topologies. Certain pairs of centralities cooperate
particularly well in statistically drawing the boundary between the top
spreaders and the rest: local centralities measuring the size of a node's
neighbourhood benefit from the addition of a global centrality such as the
eigenvector centrality, closeness, or the core number. This is, intuitively,
because a local centrality may rank highly some nodes which are located in
dense but peripheral regions of the network, a situation in which an
additional global centrality indicator can help by prioritising nodes located
more centrally. The nodes selected as superspreaders will usually jointly
maximise the values of both centralities. As a result of the interplay between
centrality indicators, training classifiers with seven classical indicators
leads to nearly maximal average precision (0.995) across the networks in this
study.
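
The pipeline described in the abstract is straightforward to prototype. The sketch below is a minimal illustration and not the authors' code: it labels the top 10% of nodes by simulated spreading power on a synthetic Barabási-Albert graph (the paper instead uses many diverse real-world topologies and its own spreading setup), computes seven common classical centralities with networkx (the paper's exact set of seven indicators may differ), and trains a scikit-learn classifier that combines them, reporting average precision.

```python
# Minimal sketch of "combine classical centralities in a statistical classifier
# to find top spreaders". Assumptions: networkx and scikit-learn available;
# SIR parameters, the seven indicators, and the synthetic graph are illustrative
# choices, not necessarily those used in the paper.
import random

import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def sir_outbreak_size(G, seed, beta=0.1, rng=None):
    """One stochastic SIR run seeded at `seed`; every infected node recovers
    after one step. Returns the number of recovered nodes (outbreak size)."""
    rng = rng or random.Random()
    infected, recovered = {seed}, set()
    while infected:
        new_infected = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and rng.random() < beta:
                    new_infected.add(v)
        recovered |= infected
        infected = new_infected - recovered
    return len(recovered)


def spreading_power(G, runs=30, beta=0.1):
    """Average outbreak size per seed node over several stochastic runs."""
    rng = random.Random(0)
    return {n: np.mean([sir_outbreak_size(G, n, beta=beta, rng=rng)
                        for _ in range(runs)])
            for n in G.nodes()}


# Example topology; the paper uses many static real-world networks instead.
G = nx.barabasi_albert_graph(500, 3, seed=1)

# Ground truth: the top 10% of nodes by simulated spreading power.
power = spreading_power(G)
threshold = np.quantile(list(power.values()), 0.9)
y = np.array([power[n] >= threshold for n in G.nodes()], dtype=int)

# Seven classical centrality indicators used as classifier features.
features = {
    "degree": nx.degree_centrality(G),
    "core_number": nx.core_number(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
    "clustering": nx.clustering(G),
}
X = np.array([[features[f][n] for f in features] for n in G.nodes()])

# A simple statistical classifier combining the centralities.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"average precision: {average_precision_score(y, scores):.3f}")
```

Retraining the same classifier on single centralities versus pairs (for example, degree alone versus degree plus eigenvector centrality, closeness, or core number) is a direct way to probe the local-plus-global effect described in the abstract.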
Related papers
- Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation [59.01527054553122]
Decentralised agents can learn equilibria in Mean-Field Games from a single, non-episodic run of the empirical system.
We introduce function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method.
We additionally provide new algorithms that allow agents to estimate the global empirical distribution based on a local neighbourhood.
arXiv Detail & Related papers (2024-08-21T13:32:46Z) - Impact of network topology on the performance of Decentralized Federated Learning [4.618221836001186]
Decentralized machine learning is gaining momentum, addressing infrastructure challenges and privacy concerns.
This study investigates the interplay between network structure and learning performance using three network topologies and six data distribution methods.
We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation.
arXiv Detail & Related papers (2024-02-28T11:13:53Z) - Core-Intermediate-Peripheral Index: Factor Analysis of Neighborhood and Shortest Paths-based Centrality Metrics [0.0]
We propose a novel measure called the Core-Intermediate-Peripheral (CIP) Index to capture the extent to which a node could play the role of a core node.
We test our approach on a diverse suite of 12 complex real-world networks.
arXiv Detail & Related papers (2023-10-10T06:52:20Z) - Hierarchical Multi-Marginal Optimal Transport for Network Alignment [52.206006379563306]
Multi-network alignment is an essential prerequisite for joint learning on multiple networks.
We propose a hierarchical multi-marginal optimal transport framework named HOT for multi-network alignment.
Our proposed HOT achieves significant improvements over the state-of-the-art in both effectiveness and scalability.
arXiv Detail & Related papers (2023-10-06T02:35:35Z) - Centralized Feature Pyramid for Object Detection [53.501796194901964]
Visual feature pyramid has shown its superiority in both effectiveness and efficiency in a wide range of applications.
In this paper, we propose a Centralized Feature Pyramid (CFP) for object detection, which is based on a globally explicit centralized feature regulation.
arXiv Detail & Related papers (2022-10-05T08:32:54Z) - Comparative evaluation of community-aware centrality measures [1.7243339961137643]
We investigate seven influential community-aware centrality measures in an epidemic spreading process scenario using the Susceptible-Infected-Recovered (SIR) model.
Results show that generally, the correlation between community-aware centrality measures is low.
In a multiple-spreader problem, when resources are available, targeting distant hubs using Modularity Vitality is more effective.
arXiv Detail & Related papers (2022-05-14T07:43:26Z) - CenGCN: Centralized Convolutional Networks with Vertex Imbalance for Scale-Free Graphs [38.427695265783726]
We propose a novel centrality-based framework named CenGCN to address the inequality of information.
We present two variants CenGCN_D and CenGCN_E, based on degree centrality and eigenvector centrality, respectively.
Results demonstrate that the two variants significantly outperform state-of-the-art baselines.
arXiv Detail & Related papers (2022-02-16T02:18:16Z) - A Modular Framework for Centrality and Clustering in Complex Networks [0.6423239719448168]
In this paper, we study two important network analysis techniques, namely centrality and clustering.
An information-flow based model is adopted for clustering, which itself builds upon an information theoretic measure for computing centrality.
Our clustering naturally inherits the flexibility to accommodate edge directionality, as well as different interpretations and interplay between edge weights and node degrees.
arXiv Detail & Related papers (2021-11-23T03:01:29Z) - Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
arXiv Detail & Related papers (2021-02-09T13:58:33Z) - Unsupervised Differentiable Multi-aspect Network Embedding [52.981277420394846]
We propose a novel end-to-end framework for multi-aspect network embedding, called asp2vec.
Our proposed framework can be readily extended to heterogeneous networks.
arXiv Detail & Related papers (2020-06-07T19:26:20Z) - Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size gets large, decentralized learning faces a major bottleneck that is the communication load due to each node transmitting messages (model updates) to its neighbors.
We propose the quantized decentralized learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization.
arXiv Detail & Related papers (2020-02-23T18:25:39Z)