Opportunities and challenges in partitioning the graph measure space of real-world networks
- URL: http://arxiv.org/abs/2106.10753v1
- Date: Sun, 20 Jun 2021 21:22:37 GMT
- Title: Opportunities and challenges in partitioning the graph measure space of real-world networks
- Authors: Máté Józsa, Alpár S. Lázár and Zsolt I. Lázár
- Abstract summary: Based on a large dataset containing thousands of real-world networks, ranging from genetic, protein interaction, and metabolic networks to brain, language, ecology, and social networks, we search for defining structural measures of the different complex network domains (CNDs).
We calculate 208 measures for all networks and, using a comprehensive and rigorous workflow of statistical and machine learning methods, investigate the limitations and possibilities of identifying the key graph measures of CNDs.
Our approach identifies clearly distinguishable groups of network domains and infers their relevant features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Based on a large dataset containing thousands of real-world networks, ranging
from genetic, protein interaction, and metabolic networks to brain, language,
ecology, and social networks, we search for defining structural measures of the
different complex network domains (CNDs). We calculate 208 measures for all
networks and, using a comprehensive and rigorous workflow of statistical and
machine learning methods, investigate the limitations and possibilities of
identifying the key graph measures of CNDs. Our approach identifies clearly
distinguishable groups of network domains and infers their relevant features.
These features turn out to be CND-specific and not unique, even at the level of
individual CNDs. The presented methodology may be applied to other similar
scenarios involving highly unbalanced and skewed datasets.
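To make the kind of workflow described in the abstract more concrete, here is a minimal sketch of computing a few structural measures per network and ranking them by how well they separate domains with a class-weighted classifier. This is an illustrative assumption built on networkx and scikit-learn, not the authors' actual 208-measure pipeline; the `dataset` variable and the choice of measures and classifier are placeholders.

```python
# Illustrative sketch (not the paper's pipeline): compute a few structural
# measures per network and rank them by importance for separating domains.
# Assumes `dataset` is an iterable of (networkx.Graph, domain_label) pairs.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

MEASURES = {
    "density": nx.density,
    "transitivity": nx.transitivity,
    "avg_clustering": nx.average_clustering,
    "degree_assortativity": nx.degree_assortativity_coefficient,
}

def measure_vector(graph):
    """Structural-measure feature vector for one network (NaNs zeroed)."""
    return np.nan_to_num(np.array([fn(graph) for fn in MEASURES.values()]))

def rank_measures(dataset):
    """Fit a class-weighted forest and rank measures by feature importance.

    Class weighting is one simple way to cope with highly unbalanced domain
    sizes; the workflow in the paper itself is far more elaborate.
    """
    X = np.array([measure_vector(g) for g, _ in dataset])
    y = np.array([label for _, label in dataset])
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                 random_state=0)
    clf.fit(X, y)
    return sorted(zip(MEASURES, clf.feature_importances_),
                  key=lambda kv: kv[1], reverse=True)
```

A real analysis would add many more measures, handle disconnected or very small graphs, and apply the statistical filtering and validation steps the abstract alludes to.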
Related papers
- Network classification through random walks [0.7373617024876725]
We introduce a novel approach to characterize networks using statistics from random walks. We compare its performance on multiple datasets with that of other state-of-the-art feature extraction methods (a toy sketch of walk-based features appears after this list).
arXiv Detail & Related papers (2025-05-27T19:43:32Z)
- Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss [0.0]
We introduce a simple, efficient, and clustering-oriented model based on an unsupervised Graph Attention Auto-Encoder for community detection in attributed networks.
The proposed model adeptly learns representations from both the network's topology and attribute information, simultaneously addressing dual objectives: reconstruction and community discovery.
arXiv Detail & Related papers (2023-11-21T20:45:55Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature space may jointly serve as discriminants of network performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This network is combined with a dueling double deep Q-network to create an end-to-end algorithm that solves the CNP for the first time.
arXiv Detail & Related papers (2021-12-03T14:23:05Z)
- Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z)
- Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs [0.9786690381850356]
In industrial applications, data coming from an open-world setting might widely differ from the benchmark datasets on which a network was trained.
We develop a method to monitor trained neural networks based on the topological properties of their activation graphs.
arXiv Detail & Related papers (2021-05-07T14:16:03Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- GAHNE: Graph-Aggregated Heterogeneous Network Embedding [32.44836376873812]
Heterogeneous network embedding aims to embed nodes into low-dimensional vectors which capture rich intrinsic information of heterogeneous networks.
Existing models either depend on manually designing meta-paths, ignore mutual effects between different semantics, or omit some aspects of information from global networks.
In the GAHNE model, we develop several mechanisms that can aggregate semantic representations from different single-type sub-networks as well as fuse the global information into the final embeddings.
arXiv Detail & Related papers (2020-12-23T07:11:30Z)
- Joint Inference of Diffusion and Structure in Partially Observed Social Networks Using Coupled Matrix Factorization [3.399624105745357]
In this paper, a model is learned from partially observed data to infer unobserved diffusion and structure networks.
The proposed method exploits the interrelations among node links and cascade processes by learning low-dimensional latent factors.
Experiments on these synthetic and real-world datasets show that the proposed method successfully detects invisible social behaviors, predicts links, and identifies latent features.
arXiv Detail & Related papers (2020-10-03T17:48:57Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
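As flagged in the random-walk entry above, here is a toy, purely illustrative sketch of deriving network features from statistics of short random walks; it is not the cited paper's method, and every function name and parameter in it is a placeholder.

```python
# Toy sketch (not the cited paper's method): summarize a network by how many
# distinct nodes short random walks tend to visit. `graph` is assumed to be
# a networkx.Graph.
import random
import numpy as np

def random_walk(graph, start, length, rng):
    """Nodes visited by one simple random walk of at most `length` steps."""
    walk = [start]
    for _ in range(length):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:  # dead end: isolated node or no outgoing edges
            break
        walk.append(rng.choice(neighbors))
    return walk

def walk_features(graph, n_walks=100, length=10, seed=0):
    """Feature vector: mean and std of distinct nodes covered per walk."""
    rng = random.Random(seed)
    nodes = list(graph.nodes)
    coverage = [len(set(random_walk(graph, rng.choice(nodes), length, rng)))
                for _ in range(n_walks)]
    return np.array([np.mean(coverage), np.std(coverage)])
```

Feature vectors like this could then feed the same kind of classifier sketched after the abstract above.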
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.