When does the mean network capture the topology of a sample of networks?
- URL: http://arxiv.org/abs/2408.03461v1
- Date: Tue, 6 Aug 2024 22:14:54 GMT
- Title: When does the mean network capture the topology of a sample of networks?
- Authors: François G. Meyer
- Abstract summary: This work is significant because it provides, for the first time, analytical estimates of the sample Fréchet mean for the stochastic blockmodel.
We show that the mean network computed with the Hamming distance is unable to capture the topology of the networks in the training sample.
From a practical standpoint, our work informs the choice of metric when the sample Fréchet mean network is used to characterise the topology of networks for network-valued machine learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The notion of Fréchet mean (also known as "barycenter") network is the workhorse of most machine learning algorithms that require the estimation of a "location" parameter to analyse network-valued data. In this context, it is critical that the network barycenter inherits the topological structure of the networks in the training dataset. The metric, which measures the proximity between networks, controls the structural properties of the barycenter. This work is significant because it provides, for the first time, analytical estimates of the sample Fréchet mean for the stochastic blockmodel, which is at the cutting edge of rigorous probabilistic analysis of random networks. We show that the mean network computed with the Hamming distance is unable to capture the topology of the networks in the training sample, whereas the mean network computed using the effective resistance distance recovers the correct partitions and associated edge density. From a practical standpoint, our work informs the choice of metrics in the context where the sample Fréchet mean network is used to characterise the topology of networks for network-valued machine learning.
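For intuition, the Hamming-distance failure described in the abstract can be reproduced in a few lines. Under the Hamming distance, the sample Fréchet mean of binary adjacency matrices is well approximated by the entrywise majority-vote graph, so when every edge probability of the blockmodel lies below 1/2 the mean network is empty with high probability and the block structure is lost. The sketch below is only an illustration under that majority-vote characterisation, not code from the paper; the SBM parameters, the sample size, and the helper names (sample_sbm, resistance_matrix) are arbitrary choices for the demo.

```python
# Minimal sketch (not the paper's code): why the Hamming barycenter can
# lose the blockmodel topology. Every parameter value below is arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(block_sizes, P, rng):
    """Sample a symmetric, hollow adjacency matrix from an SBM."""
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    probs = P[np.ix_(labels, labels)]
    upper = np.triu(rng.random(probs.shape) < probs, k=1)
    return (upper | upper.T).astype(int)

def resistance_matrix(A):
    """Effective resistances R_ij = L+_ii + L+_jj - 2 L+_ij, via the
    pseudoinverse of the graph Laplacian (assumes a connected graph)."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

# Two communities: dense within blocks, sparse between them,
# with all edge probabilities below 1/2.
block_sizes = [50, 50]
P = np.array([[0.40, 0.05],
              [0.05, 0.40]])
sample = [sample_sbm(block_sizes, P, rng) for _ in range(200)]

# Hamming Frechet mean ~ entrywise majority vote over the sample:
# every mean entry sits near its edge probability (< 1/2), so the
# thresholded graph is empty and the partition information vanishes.
hamming_mean = (np.mean(sample, axis=0) > 0.5).astype(int)
print("edges in a typical network:", sample[0].sum() // 2)
print("edges in the Hamming mean :", hamming_mean.sum() // 2)  # ~0

# The effective resistance distance instead compares graphs through
# their resistance matrices, e.g. ||R(G1) - R(G2)||_F.
d_rr = np.linalg.norm(resistance_matrix(sample[0]) - resistance_matrix(sample[1]))
print("resistance distance between two sample networks:", round(d_rr, 2))
```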
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - A Theoretical View on Sparsely Activated Networks [21.156069843782017]
We present a formal model of data-dependent sparse networks that captures salient aspects of popular architectures.
We then introduce a routing function based on locality sensitive hashing (LSH) that enables us to reason about how well sparse networks approximate target functions.
We prove that sparse networks can match the approximation power of dense networks on Lipschitz functions.
arXiv Detail & Related papers (2022-08-08T23:14:48Z) - NOTMAD: Estimating Bayesian Networks with Sample-Specific Structures and Parameters [70.55488722439239]
We present NOTMAD, which learns to mix archetypal networks according to sample context.
We demonstrate the utility of NOTMAD and sample-specific network inference through analysis and experiments, including patient-specific gene expression networks.
arXiv Detail & Related papers (2021-11-01T17:17:34Z) - Interpretable Network Representation Learning with Principal Component Analysis [1.2183405753834557]
We consider the problem of interpretable network representation learning for samples of network-valued data.
We propose the Principal Component Analysis for Networks (PCAN) algorithm to identify statistically meaningful low-dimensional representations of a network sample.
We introduce a fast sampling-based algorithm, sPCAN, which is significantly more computationally efficient than its counterpart while retaining its interpretability advantages.
arXiv Detail & Related papers (2021-06-27T13:52:49Z) - A Probabilistic Approach to Neural Network Pruning [20.001091112545065]
We theoretically study the performance of two pruning techniques (random and magnitude-based) on FCNs and CNNs.
The results establish that there exist pruned networks with expressive power within any specified bound from the target network.
arXiv Detail & Related papers (2021-05-20T23:19:43Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Neural Network Tomography [4.407668482702675]
Network tomography is a classic research problem in the realm of network monitoring.
NeuTomography utilizes deep neural networks and data augmentation to predict the unmeasured performance metrics.
NeuTomography can be employed to reconstruct the original network topology.
arXiv Detail & Related papers (2020-01-09T12:19:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all generated content) and is not responsible for any consequences of its use.