Fitting Low-rank Models on Egocentrically Sampled Partial Networks
- URL: http://arxiv.org/abs/2303.11230v1
- Date: Thu, 9 Mar 2023 03:20:44 GMT
- Title: Fitting Low-rank Models on Egocentrically Sampled Partial Networks
- Authors: Angus Chan and Tianxi Li
- Abstract summary: We propose an approach to fit general low-rank models for egocentrically sampled networks.
This method offers the first theoretical guarantee for egocentric partial network estimation.
We evaluate the technique on several synthetic and real-world networks and show that it delivers competitive performance in link prediction tasks.
- Score: 4.111899441919165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The statistical modeling of random networks has been widely used to uncover
interaction mechanisms in complex systems and to predict unobserved links in
real-world networks. In many applications, network connections are collected
via egocentric sampling: a subset of nodes is sampled first, after which all
links involving this subset are recorded; all other information is missing.
Compared with the assumption of "uniformly missing at random", egocentrically
sampled partial networks require specially designed modeling strategies.
Current statistical methods are either computationally infeasible or based on
intuitive designs without theoretical justification. Here, we propose an
approach to fit general low-rank models for egocentrically sampled networks,
which include several popular network models. This method is based on graph
spectral properties and is computationally efficient for large-scale networks.
It yields consistent recovery of the subnetworks left missing by egocentric
sampling, even for sparse networks. To our knowledge, this method offers the first
theoretical guarantee for egocentric partial network estimation in the scope of
low-rank models. We evaluate the technique on several synthetic and real-world
networks and show that it delivers competitive performance in link prediction
tasks.
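
To make the sampling scheme concrete, the sketch below simulates an egocentrically sampled network and fills in the fully unobserved block with a Nyström-style rank-k completion. This is a minimal illustration written for this summary: the toy model, variable names, and the completion formula are assumptions, not the authors' spectral estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rank-2 latent position model: P = Z Z^T, A ~ Bernoulli(P),
# symmetric with no self-loops.
n, k = 200, 2
Z = rng.uniform(0.1, 0.6, size=(n, k))
P = Z @ Z.T
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Egocentric sampling: sample a node subset S and record every edge
# incident to S; the block among unsampled nodes is entirely missing.
S = rng.random(n) < 0.5
obs, mis = np.flatnonzero(S), np.flatnonzero(~S)
A11 = A[np.ix_(obs, obs)]   # sampled-sampled block (observed)
A12 = A[np.ix_(obs, mis)]   # sampled-unsampled block (observed)

# Nyström-style rank-k completion of the missing block (an assumed
# stand-in for the paper's spectral estimator): P22 ~= A21 A11^+ A12.
U, s, Vt = np.linalg.svd(A11)
A11_pinv_k = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
P22_hat = A12.T @ A11_pinv_k @ A12

P22 = P[np.ix_(mis, mis)]
print("relative error:", np.linalg.norm(P22_hat - P22) / np.linalg.norm(P22))
```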
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow one to construct model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
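
For context, "model-based" networks are obtained by unrolling an iterative optimization algorithm into layers. The sketch below shows a LISTA-style unrolled ISTA for sparse recovery with classical ISTA weight initialization; it is a generic illustration, and the specific architectures and bounds studied in the paper are not reproduced.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, W_e, W_s, thetas):
    # Each "layer" is one unrolled ISTA step with (in LISTA, learnable)
    # matrices W_e, W_s and per-layer thresholds.
    x = soft_threshold(W_e @ y, thetas[0])
    for theta in thetas[1:]:
        x = soft_threshold(W_e @ y + W_s @ x, theta)
    return x

# Classical ISTA initialization of the weights (training would adapt them).
rng = np.random.default_rng(1)
m, n = 20, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
W_e, W_s = A.T / L, np.eye(n) - (A.T @ A) / L

x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = 1.0
y = A @ x_true
x_hat = lista_forward(y, W_e, W_s, [0.01] * 10)
print("true:", np.sort(np.flatnonzero(x_true)))
print("est :", np.sort(np.argsort(-np.abs(x_hat))[:3]))
```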
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
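
A minimal sketch of such a sinusoidal layer is below, assuming a SIREN-style parametrization in which a frequency scale omega0 plays the role of the adjustable kernel bandwidth; the initialization and names are illustrative, not the paper's exact scheme.

```python
import numpy as np

def sinusoidal_layer(x, W, b, omega0):
    # sin(omega0 * (xW + b)): omega0 is the frequency scale that acts as
    # the bandwidth knob; larger omega0 admits higher-frequency signals.
    return np.sin(omega0 * (x @ W + b))

rng = np.random.default_rng(2)
d_in, d_hidden = 1, 64
W1 = rng.uniform(-1, 1, (d_in, d_hidden)) / d_in   # SIREN-style init (assumed)
b1 = rng.uniform(-np.pi, np.pi, d_hidden)
w2 = rng.standard_normal(d_hidden) / np.sqrt(d_hidden)

x = np.linspace(-1, 1, 5).reshape(-1, 1)
for omega0 in (1.0, 30.0):   # low- vs. high-bandwidth network
    out = sinusoidal_layer(x, W1, b1, omega0) @ w2
    print(f"omega0={omega0:>4}: {np.round(out, 3)}")
```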
- Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning [4.97235247328373]
This work addresses meta-learning by considering deep networks with local winner-takes-all (LWTA) activations.
These units yield sparse representations at each model layer: the units are organized into blocks, and within each block only one unit generates a non-zero output.
Our approach produces state-of-the-art predictive accuracy on few-shot image classification and regression experiments, as well as reduced predictive error in an active learning setting.
arXiv Detail & Related papers (2022-08-02T16:19:54Z)
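
The block-sparse mechanism is easy to state in code. Below is a minimal sketch of an LWTA activation with a deterministic (argmax) winner and a softmax-sampled winner as a stand-in for the paper's stochastic variant.

```python
import numpy as np

def lwta(h, block_size, rng=None):
    # Local winner-takes-all: units are grouped into blocks, and within
    # each block only one unit passes its value through; the rest emit 0.
    B = h.reshape(-1, block_size)
    if rng is None:
        win = B.argmax(axis=1)          # deterministic winner
    else:
        p = np.exp(B - B.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Softmax-sampled winner: a stand-in for the stochastic units.
        win = np.array([rng.choice(block_size, p=row) for row in p])
    mask = np.zeros_like(B)
    mask[np.arange(len(B)), win] = 1.0
    return (B * mask).reshape(h.shape)

h = np.array([0.2, -1.0, 0.7, 0.1, 0.5, -0.3])
print(lwta(h, block_size=2))   # [0.2 0.  0.7 0.  0.5 0. ]
```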
- An Approach for Link Prediction in Directed Complex Networks based on Asymmetric Similarity-Popularity [0.0]
This paper introduces a link prediction method designed explicitly for directed networks.
It is based on the similarity-popularity paradigm, which has recently proven successful in undirected networks.
The algorithms approximate the hidden similarities as shortest path distances using edge weights that capture and factor out the links' asymmetry and nodes' popularity.
arXiv Detail & Related papers (2022-07-15T11:03:25Z)
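
The shortest-path idea can be sketched as follows, assuming networkx is available. The edge-weight and scoring functions here are illustrative stand-ins; the paper's actual asymmetric weighting that factors out popularity is not reproduced.

```python
import networkx as nx
import numpy as np

def link_scores(G):
    # Re-weight edges so that reaching a popular target is "cheap", then
    # use directed shortest-path distance as an inverse similarity.
    # Both functional forms below are assumptions for illustration.
    H = nx.DiGraph()
    for u, v in G.edges():
        H.add_edge(u, v, weight=1.0 / (1.0 + np.log1p(G.in_degree(v))))
    dist = dict(nx.all_pairs_dijkstra_path_length(H, weight="weight"))
    scores = {}
    for u in G:
        for v in G:
            if u != v and not G.has_edge(u, v) and v in dist.get(u, {}):
                # High target popularity, short distance -> likely link.
                scores[(u, v)] = np.log1p(G.in_degree(v)) - dist[u][v]
    return scores

G = nx.gnp_random_graph(30, 0.1, seed=3, directed=True)
best = sorted(link_scores(G).items(), key=lambda kv: -kv[1])[:3]
print(best)
```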
- Latent Network Embedding via Adversarial Auto-encoders [15.656374849760734]
We propose a latent network embedding model based on adversarial graph auto-encoders.
Under this framework, the problem of discovering latent structures is formulated as inferring the latent ties from partial observations.
arXiv Detail & Related papers (2021-09-30T16:49:46Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Community models for networks observed through edge nominations [6.442024233731203]
Communities are a common and widely studied structure in networks, typically under the assumption that the network is fully and correctly observed.
We propose a general model for a class of network sampling mechanisms based on recording edges via querying nodes.
We show that community detection can be performed by spectral clustering under this general class of models.
arXiv Detail & Related papers (2020-08-09T04:53:13Z)
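
Adjacency spectral clustering itself is standard; the sketch below applies the generic recipe (top-k eigenvectors plus k-means) to a toy two-block network. The paper's specific handling of edge-nomination sampling is not reproduced here.

```python
import numpy as np

def spectral_clustering(A, k, n_iter=50, seed=0):
    # Embed nodes with the k leading eigenvectors of A, then run a
    # small k-means loop on the rows of the embedding.
    vals, vecs = np.linalg.eigh(A)
    X = vecs[:, np.argsort(-np.abs(vals))[:k]]
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy two-block stochastic block model with planted communities.
rng = np.random.default_rng(4)
z = np.repeat([0, 1], 50)
P = np.where(z[:, None] == z[None, :], 0.3, 0.05)
A = (rng.random((100, 100)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T
labels = spectral_clustering(A, k=2)
print(labels[:5], "...", labels[-5:])
```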
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
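
A minimal sketch of layer-wise fusion is below, assuming scipy is available. Hard one-to-one neuron matching via the Hungarian algorithm stands in for the paper's soft optimal-transport coupling; the names and toy setup are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(W_a, W_b):
    # Align the rows (neurons) of W_b to those of W_a, then average.
    # Hungarian matching is a hard stand-in for the soft OT coupling.
    cost = -W_a @ W_b.T                     # negative similarity as cost
    _, cols = linear_sum_assignment(cost)
    return 0.5 * (W_a + W_b[cols])

rng = np.random.default_rng(5)
W_a = rng.standard_normal((4, 8))
W_b = W_a[rng.permutation(4)] + 0.01 * rng.standard_normal((4, 8))
fused = fuse_layers(W_a, W_b)
print(np.allclose(fused, W_a, atol=0.05))   # alignment undoes the permutation
```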