Inferring community characteristics in labelled networks
- URL: http://arxiv.org/abs/2105.13762v1
- Date: Fri, 28 May 2021 12:07:10 GMT
- Title: Inferring community characteristics in labelled networks
- Authors: Ioannis Kontoyiannis and Lawrence Tray
- Abstract summary: We introduce a new generative model, the feature-first block model (FFBM).
We present a method to efficiently sample from the posterior distribution of the FFBM parameters.
The main advantages of the proposed approach are that the whole feature-space is used automatically, and features can be rank-ordered implicitly according to impact.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Labelled networks form a very common and important class of data, naturally
appearing in numerous applications in science and engineering. A typical
inference goal is to determine how the vertex labels (or {\em features}) affect
the network's graph structure. A standard approach has been to partition the
network into blocks grouped by distinct values of the feature of interest. A
block-based random graph model -- typically a variant of the stochastic block
model -- is then used to test for evidence of asymmetric behaviour within these
feature-based communities. Nevertheless, the resulting communities often do not
produce a natural partition of the graph. In this work, we introduce a new
generative model, the feature-first block model (FFBM), which is more effective
at describing vertex-labelled undirected graphs and also facilitates the use of
richer queries on labelled networks. We develop a Bayesian framework for
inference with this model, and we present a method to efficiently sample from
the posterior distribution of the FFBM parameters. The FFBM's structure is kept
deliberately simple to retain easy interpretability of the parameter values. We
apply the proposed methods to a variety of network data to extract the most
important features along which the vertices are partitioned. The main
advantages of the proposed approach are that the whole feature-space is used
automatically, and features can be rank-ordered implicitly according to impact.
Any features that do not significantly impact the high-level structure can be
discarded to reduce the problem dimension. In cases where the vertex features
available do not readily explain the community structure in the resulting
network, the approach detects this and is protected against over-fitting.
Results on several real-world datasets illustrate the performance of the
proposed methods.
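The block-based modelling idea in the abstract can be made concrete with a small sketch. The snippet below is an illustrative, hypothetical implementation (not the authors' code) of a Bernoulli stochastic block model likelihood in which communities are defined directly by a vertex feature, in the spirit of the FFBM; the function name, toy graph, and probability matrix are all assumptions for illustration.

```python
import numpy as np

def sbm_log_likelihood(adj, groups, p):
    """Bernoulli SBM log-likelihood of an undirected graph.

    adj    : (n, n) symmetric 0/1 adjacency matrix, no self-loops
    groups : length-n array of community indices derived from a vertex feature
    p      : (K, K) matrix of inter-community edge probabilities
    """
    n = len(groups)
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            prob = p[groups[i], groups[j]]
            ll += np.log(prob) if adj[i, j] else np.log(1.0 - prob)
    return ll

# Toy example: two communities defined by a binary vertex feature.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]])
groups = np.array([0, 0, 1, 1])
p = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(sbm_log_likelihood(adj, groups, p))  # ≈ -0.632 (= 6·ln 0.9)
```

In a Bayesian treatment like the one described above, this likelihood would be combined with priors over the partition and over p, and the posterior sampled rather than maximised.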
Related papers
- Boosting Graph Foundation Model from Structural Perspective [6.387816922598151]
We boost graph foundation model from structural perspective and propose BooG.
BooG constructs virtual super nodes to unify structural characteristics of graph data from different domains.
We also propose a novel pre-training objective based on contrastive learning, which learns more expressive representations for graph data.
arXiv Detail & Related papers (2024-07-29T12:22:16Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first applied to use neighbors' information, and then a bias mitigation step explicitly pushes the demographic groups' node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
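The two FMP stages described in that summary can be sketched as follows; this is an assumed, simplified illustration (uniform toy aggregation, a single pull-toward-center step with strength `lam`), not the paper's implementation.

```python
import numpy as np

def fmp_step(adj_norm, h, groups, lam=0.5):
    """One sketched FMP-style step: aggregate, then mitigate group bias.

    adj_norm : (n, n) normalized adjacency used for aggregation
    h        : (n, d) node representations
    groups   : length-n array of demographic group labels
    lam      : strength of the pull toward the global center (assumption)
    """
    h = adj_norm @ h                          # step 1: neighbor aggregation
    center = h.mean(axis=0)                   # global representation center
    out = h.copy()
    for g in np.unique(groups):
        mask = groups == g
        # step 2: move this group's center toward the global center
        out[mask] -= lam * (h[mask].mean(axis=0) - center)
    return out
```

With this update, the gap between two group centers shrinks by a factor of (1 - lam) relative to plain aggregation, which is the sense in which the centers are "pushed together".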
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Semi-Supervised Clustering of Sparse Graphs: Crossing the
Information-Theoretic Threshold [3.6052935394000234]
The block model is a canonical random graph model for clustering and community detection on network-structured data.
No estimator based on the network topology can perform substantially better than chance on sparse graphs if the model parameter is below a certain threshold.
We prove that, when an arbitrary fraction of the labels is revealed, recovery is feasible throughout the parameter domain.
arXiv Detail & Related papers (2022-05-24T00:03:25Z) - Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z) - Representation Learning of Graphs Using Graph Convolutional Multilayer
Networks Based on Motifs [17.823543937167848]
mGCMN is a novel framework which utilizes node feature information and the higher order local structure of the graph.
It greatly improves the learning efficiency of graph neural networks and promotes the establishment of a new learning mode.
arXiv Detail & Related papers (2020-07-31T04:18:20Z) - Building powerful and equivariant graph neural networks with structural
message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
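The two ideas in that summary can be sketched in a few lines; this is an assumed illustration (sum aggregation, a single weight matrix), not the paper's parametrization: each node propagates a one-hot identity encoding u alongside its features x, building a local context of reachable node identities, and plain sum aggregation keeps both updates permutation-equivariant.

```python
import numpy as np

def sketch_step(adj, x, u, w):
    x_new = np.tanh(adj @ x @ w)   # aggregate and transform neighbor features
    u_new = adj @ u                # aggregate one-hot identity context
    return x_new, u_new

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 3))    # node features
u = np.eye(4)                      # one-hot encoding of node identities
w = rng.standard_normal((3, 3))
x1, u1 = sketch_step(adj, x, u, w)
```

Equivariance here means that relabelling the nodes by a permutation P permutes x1 by P and conjugates u1 by P, which can be checked directly.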
arXiv Detail & Related papers (2020-06-26T17:15:16Z) - Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
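A composite-kernel aggregation of the kind described there can be sketched as follows; the Gaussian feature kernel and row normalization are illustrative assumptions (the paper's learnable kernel is not specified in this summary).

```python
import numpy as np

def composite_aggregate(adj, x, gamma=1.0):
    """Aggregate features with a composition of two kernels:
    the neighbor-based kernel (adjacency) and a feature-similarity kernel."""
    # Gaussian feature kernel: k(i, j) = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    k_feat = np.exp(-gamma * sq)
    weights = adj * k_feat                   # compose the two kernels
    # normalize rows so each node takes a weighted average of its neighbors
    weights = weights / np.maximum(weights.sum(1, keepdims=True), 1e-12)
    return weights @ x
```

With gamma = 0 the feature kernel is uniform and this reduces to a plain mean over neighbors; larger gamma down-weights neighbors whose features differ.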
arXiv Detail & Related papers (2020-05-16T04:44:29Z) - Residual Correlation in Graph Neural Network Regression [39.54530450932135]
We show that the conditional independence assumption severely limits predictive power.
We address this problem with an interpretable and efficient framework.
Our framework achieves substantially higher accuracy than competing baselines.
arXiv Detail & Related papers (2020-02-19T16:32:54Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for fitting such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.