Testing for correlation between network structure and high-dimensional node covariates
- URL: http://arxiv.org/abs/2509.03772v1
- Date: Wed, 03 Sep 2025 23:33:17 GMT
- Title: Testing for correlation between network structure and high-dimensional node covariates
- Authors: Alexander Fuchs-Kreiss, Keith Levin
- Abstract summary: In many application domains, networks are observed with node-level features. In such settings, a common problem is to assess whether or not nodal covariates are correlated with the network structure itself. We present four novel methods for addressing this problem.
- Score: 47.791962198275066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many application domains, networks are observed with node-level features. In such settings, a common problem is to assess whether or not nodal covariates are correlated with the network structure itself. Here, we present four novel methods for addressing this problem. Two of these are based on a linear model relating node-level covariates to latent node-level variables that drive network structure. The other two are based on applying canonical correlation analysis to the node features and network structure, avoiding the linear modeling assumptions. We provide theoretical guarantees for all four methods when the observed network is generated according to a low-rank latent space model endowed with node-level covariates, which we allow to be high-dimensional. Our methods are computationally cheaper and require fewer modeling assumptions than previous approaches to network dependency testing. We demonstrate and compare the performance of our novel methods on both simulated and real-world data.
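The abstract's CCA-based approach can be illustrated with a minimal sketch: embed the adjacency matrix spectrally, compute the top canonical correlation between the embedding and the node covariates, and calibrate it with a permutation null. This is an illustrative reconstruction under assumed choices (adjacency spectral embedding, permutation test), not the paper's exact procedure; all function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_embedding(A, d):
    """Top-d adjacency spectral embedding (eigenvectors scaled by sqrt|eigenvalue|)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def top_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_permutation_test(A, X, d=2, n_perm=200):
    """Test dependence between network structure and covariates X."""
    Z = spectral_embedding(A, d)
    stat = top_canonical_corr(Z, X)
    # Permuting node labels of X breaks any network-covariate alignment.
    null = [top_canonical_corr(Z, X[rng.permutation(len(X))])
            for _ in range(n_perm)]
    pval = np.mean([s >= stat for s in null])
    return stat, pval

# Toy example: a two-block network whose blocks align with a 1-d covariate.
n = 60
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], 0.5, 0.1)
A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T  # symmetric, hollow adjacency matrix
X = blocks[:, None] + 0.3 * rng.standard_normal((n, 1))

stat, pval = cca_permutation_test(A, X)
```

Under this toy model the observed canonical correlation is large relative to the permutation null, so the test rejects independence, as it should when covariates track latent community structure.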
Related papers
- Beyond Fixed Depth: Adaptive Graph Neural Networks for Node Classification Under Varying Homophily [10.0426843232642]
We develop a theoretical framework that links local structural and label characteristics to information propagation dynamics. We propose a novel adaptive-depth GNN architecture that dynamically selects node-specific aggregation depths. Our method seamlessly adapts to both homophilic and heterophilic patterns within a unified model.
arXiv Detail & Related papers (2025-11-10T01:37:51Z) - SimCalib: Graph Neural Network Calibration based on Similarity between Nodes [60.92081159963772]
Graph neural networks (GNNs) have exhibited impressive performance in modeling graph data as exemplified in various applications.
We shed light on the relationship between GNN calibration and nodewise similarity via theoretical analysis.
A novel calibration framework, named SimCalib, is accordingly proposed to consider similarity between nodes at global and local levels.
arXiv Detail & Related papers (2023-12-19T04:58:37Z) - Backpropagation on Dynamical Networks [0.0]
We propose a network inference method based on the backpropagation through time (BPTT) algorithm commonly used to train recurrent neural networks.
An approximation of local node dynamics is first constructed using a neural network.
Free-run prediction performance with the resulting local models and weights was found to be comparable to that of the true system.
arXiv Detail & Related papers (2022-07-07T05:22:44Z) - Linear Connectivity Reveals Generalization Strategies [54.947772002394736]
Some pairs of finetuned models have large barriers of increasing loss on the linear paths between them.
We find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster.
Our work demonstrates how the geometry of the loss surface can guide models towards different functions.
arXiv Detail & Related papers (2022-05-24T23:43:02Z) - On the Power of Gradual Network Alignment Using Dual-Perception Similarities [14.779474659172923]
Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
Our study is motivated by the fact that most existing NA methods attempt to discover all node pairs at once and therefore do not harness the information gained through interim discovery of node correspondences.
We propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency.
arXiv Detail & Related papers (2022-01-26T14:01:32Z) - Block Dense Weighted Networks with Augmented Degree Correction [1.2031796234206138]
We propose a new framework for generating and estimating dense weighted networks with potentially different connectivity patterns.
The proposed model relies on a particular class of functions which map individual node characteristics to the edges connecting those nodes.
We also develop a bootstrap methodology for generating new networks on the same set of vertices, which may be useful in circumstances where multiple data sets cannot be collected.
arXiv Detail & Related papers (2021-05-26T01:25:07Z) - Convexifying Sparse Interpolation with Infinitely Wide Neural Networks: An Atomic Norm Approach [4.380224449592902]
This work examines the problem of exact data interpolation via sparse (neuron count), infinitely wide, single-hidden-layer neural networks with leaky rectified linear unit activations.
We derive simple characterizations of the convex hulls of the corresponding atomic sets for this problem under several different constraints on the weights and biases of the network.
A modest extension of our proposed framework to a binary classification problem is also presented.
arXiv Detail & Related papers (2020-07-15T21:40:51Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network
Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z) - Consistency of Spectral Clustering on Hierarchical Stochastic Block
Models [5.983753938303726]
We study the hierarchy of communities in real-world networks under a generic block model.
We prove the strong consistency of this method under a wide range of model parameters.
Unlike most existing work, our theory covers multiscale networks where the connection probabilities may differ by orders of magnitude.
arXiv Detail & Related papers (2020-04-30T01:08:59Z)
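For readers unfamiliar with the spectral clustering that the entry above studies, a minimal sketch on a two-block stochastic block model follows. This is plain spectral clustering on an assumed toy model, not the paper's hierarchical method; the setup (block probabilities, sign-based assignment) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-block stochastic block model: within-block edge probability 0.6,
# between-block probability 0.05.
n = 80
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], 0.6, 0.05)
A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T  # symmetric, hollow adjacency matrix

# For a two-block model, the sign pattern of the eigenvector associated
# with the second-largest eigenvalue of A separates the communities.
vals, vecs = np.linalg.eigh(A)
u2 = vecs[:, np.argsort(vals)[-2]]
labels = (u2 > 0).astype(int)

# Clustering accuracy, up to label switching.
acc = max(np.mean(labels == blocks), np.mean(labels != blocks))
```

With this much separation between the within- and between-block probabilities, the sign split recovers the planted communities almost exactly, which is the kind of exact-recovery behavior the consistency theory above formalizes.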
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.