Network Topology Inference from Smooth Signals Under Partial Observability
- URL: http://arxiv.org/abs/2410.05707v2
- Date: Sat, 19 Oct 2024 14:54:34 GMT
- Title: Network Topology Inference from Smooth Signals Under Partial Observability
- Authors: Chuansen Peng, Hanning Tang, Zhiguo Wang, Xiaojing Shen
- Abstract summary: Inferring network topologies from smooth signals with partially observed nodes is a significant problem in data science and engineering.
We propose a first-order algorithmic framework that includes two variants: one based on column sparsity regularization and the other on a low-rank constraint.
We establish theoretical convergence guarantees and demonstrate the linear convergence rate of our algorithms.
- Score: 17.491343739469627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferring network topology from smooth signals is a significant problem in data science and engineering. A common challenge in real-world scenarios is the availability of only partially observed nodes. While some studies have considered hidden nodes and proposed various optimization frameworks, existing methods often lack the practical efficiency needed for large-scale networks or fail to provide theoretical convergence guarantees. In this paper, we address the problem of inferring network topologies from smooth signals with partially observed nodes. We propose a first-order algorithmic framework that includes two variants: one based on column sparsity regularization and the other on a low-rank constraint. We establish theoretical convergence guarantees and demonstrate the linear convergence rate of our algorithms. Extensive experiments on both synthetic and real-world data show that our results align with theoretical predictions, exhibiting not only linear convergence but also superior speed compared to existing methods. To the best of our knowledge, this is the first work to propose a first-order algorithmic framework for inferring network structures from smooth signals under partial observability, offering both guaranteed linear convergence and practical effectiveness for large-scale networks.
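For intuition, here is a minimal first-order (proximal gradient) sketch of smooth-signal graph learning with a column-sparsity penalty, in the spirit of the framework described above. The objective, the way hidden nodes are handled, and every name below are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch (assumptions, not the paper's code): learn a weight matrix W
# over observed nodes from smooth signals via proximal gradient descent.
# Objective: <W, Z> - sum_i log(deg_i) + alpha*||W||_F^2 + tau*sum_j ||W[:,j]||_2,
# where Z holds pairwise squared signal distances; the group-lasso term is a
# crude stand-in for the column-sparsity regularization mentioned above.
import numpy as np

def col_sparse_prox(W, t):
    """Prox of t * sum_j ||W[:, j]||_2: shrinks whole columns toward zero."""
    norms = np.maximum(np.linalg.norm(W, axis=0, keepdims=True), 1e-12)
    return W * np.maximum(1.0 - t / norms, 0.0)

def infer_topology(X, alpha=1.0, tau=0.1, step=1e-2, iters=500):
    """X: (n, m) matrix of m smooth signals observed on n nodes."""
    n = X.shape[0]
    Z = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)  # squared distances
    W = np.ones((n, n)) - np.eye(n)
    for _ in range(iters):
        deg = np.maximum(W.sum(axis=1), 1e-12)
        # Gradient of the smooth part: <W, Z> - sum log(deg) + alpha*||W||_F^2.
        grad = Z - 1.0 / deg[:, None] - 1.0 / deg[None, :] + 2.0 * alpha * W
        W = col_sparse_prox(W - step * grad, step * tau)
        W = np.maximum((W + W.T) / 2.0, 0.0)  # heuristic projection: symmetric, >= 0
        np.fill_diagonal(W, 0.0)
    return W
```

A low-rank variant in this style would swap the group-lasso prox for a truncated-SVD projection onto a rank constraint; in both cases each iteration costs a few matrix passes, which is what makes a first-order scheme attractive for large networks.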
Related papers
- Interacting Particle Systems on Networks: joint inference of the network and the interaction kernel [8.535430501710712]
We jointly infer the weight matrix of the network and the interaction kernel, which together determine the rules of the interactions between agents.
We use two algorithms, one of which is a new algorithm named operator regression with alternating least squares.
Both algorithms are scalable, and conditions guaranteeing identifiability and well-posedness are provided.
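As a hedged illustration of how an alternating-least-squares scheme for such joint inference can look (the dynamics model, the polynomial basis, and all names below are assumptions, not the paper's code): when the dynamics are bilinear in the network weights and the kernel coefficients, each block reduces to a linear least-squares solve.

```python
# Sketch under assumed model: V[t,i] = sum_{j != i} a[i,j] * phi(X[t,j]-X[t,i]),
# with phi(r) = sum_k c[k] * basis_k(r). Bilinear in (a, c) -> alternate solves.
import numpy as np

def basis(r):
    # Simple polynomial dictionary for the interaction kernel (an assumption).
    return np.stack([np.ones_like(r), r, r**2], axis=-1)      # (..., K)

def als_joint_inference(X, V, n_iter=20, seed=0):
    """X, V: (T, n) agent states and their observed time-derivatives."""
    T, n = X.shape
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(3)                 # kernel coefficients
    a = np.zeros((n, n))                       # network weight matrix
    R = X[:, None, :] - X[:, :, None]          # R[t,i,j] = X[t,j] - X[t,i]
    B = basis(R)                               # (T, n, n, K)
    mask = 1.0 - np.eye(n)                     # exclude self-interaction
    for _ in range(n_iter):
        # Step 1: fix kernel c, solve the weights a row-by-row.
        Phi = (B @ c) * mask                   # (T, n, n): masked phi(R)
        for i in range(n):
            a[i], *_ = np.linalg.lstsq(Phi[:, i, :], V[:, i], rcond=None)
        # Step 2: fix weights a, solve the kernel coefficients c.
        G = np.einsum('ij,tijk->tik', a * mask, B)   # (T, n, K) design tensor
        c, *_ = np.linalg.lstsq(G.reshape(T * n, -1), V.reshape(T * n), rcond=None)
    return a * mask, c
```

Since the model is invariant to the rescaling (a, c) -> (s*a, c/s), normalizing c after each sweep (e.g. to unit norm) removes the ambiguity.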
arXiv Detail & Related papers (2024-02-13T12:29:38Z)
- Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z)
- No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths [12.068608358926317]
First-order optimization algorithms are known to efficiently locate favorable minima in deep neural networks.
We focus on the fundamental geometric properties of sampled optimization quantities along two key paths.
Our findings suggest that not only do optimization trajectories never encounter significant obstacles, but they also maintain stable dynamics during the majority of training.
arXiv Detail & Related papers (2023-06-20T22:10:40Z)
- Towards Higher-order Topological Consistency for Unsupervised Network Alignment [41.763907024585926]
We propose a fully unsupervised network alignment framework named HTC.
The proposed higher-order topological consistency is formulated based on edge orbits.
The encoder is trained to be multi-orbit-aware and then refined to identify more trusted anchor links.
arXiv Detail & Related papers (2022-08-26T07:09:13Z)
- Higher-order accurate two-sample network inference and network hashing [13.984114642035692]
Two-sample hypothesis testing for network comparison presents many significant challenges.
We develop a comprehensive toolbox featuring a novel main method and its variants.
Our method outperforms existing tools in speed and accuracy, and it is proven to be power-optimal.
arXiv Detail & Related papers (2022-08-16T07:31:11Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- An Unbiased Symmetric Matrix Estimator for Topology Inference under Partial Observability [16.60607849384252]
This letter considers the problem of network topology inference under the framework of partial observability.
We propose a novel unbiased estimator for the symmetric network topology under Gaussian noise and the Laplacian combination rule.
An effective algorithm, called the network inference Gauss algorithm, is developed to infer the network structure.
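As a toy, hedged illustration of moment-based topology estimation over the observed nodes (assuming a first-order diffusion model y_{t+1} = A y_t + Gaussian noise; this is not the letter's unbiased estimator or its correction for hidden nodes):

```python
# Toy moment-based estimate of the observed-block combination matrix from
# streaming signals; the model and all names are assumptions for illustration.
import numpy as np

def estimate_observed_block(Y):
    """Y: (T, n_obs) time series of signals on the observed nodes,
    assumed to follow y_{t+1} = A y_t + Gaussian noise (hidden nodes ignored)."""
    Y0, Y1 = Y[:-1], Y[1:]
    R0 = Y0.T @ Y0 / len(Y0)            # lag-0 sample covariance
    R1 = Y1.T @ Y0 / len(Y0)            # lag-1 cross-covariance
    A_hat = R1 @ np.linalg.pinv(R0)     # least-squares system estimate
    return (A_hat + A_hat.T) / 2.0      # enforce the symmetric topology
```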
arXiv Detail & Related papers (2022-03-29T12:49:25Z)
- The Sample Complexity of One-Hidden-Layer Neural Networks [57.6421258363243]
We study a class of scalar-valued one-hidden-layer networks with inputs bounded in Euclidean norm.
We prove that controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees.
We analyze two important settings where a mere spectral norm control turns out to be sufficient.
arXiv Detail & Related papers (2022-02-13T07:12:02Z)
- Unified Field Theory for Deep and Recurrent Neural Networks [56.735884560668985]
We present a unified and systematic derivation of the mean-field theory for both recurrent and deep networks.
We find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks.
Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$.
arXiv Detail & Related papers (2021-12-10T15:06:11Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that splits the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
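A toy sketch of the iterative-splitting-plus-parallel-checking idea (everything here, including the placeholder verify_region, is an illustrative assumption rather than the paper's method):

```python
# Toy sketch: recursively bisect the input box along its widest dimension,
# then check the independent sub-problems in parallel worker processes.
# verify_region is a placeholder standing in for a real verifier call
# (e.g. bound propagation); it is NOT the paper's algorithm.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def bisect_box(lo, hi):
    """Split an axis-aligned input box along its widest dimension."""
    d = int(np.argmax(hi - lo))
    mid = (lo[d] + hi[d]) / 2.0
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[d], lo_right[d] = mid, mid
    return (lo, hi_left), (lo_right, hi)

def verify_region(box):
    lo, hi = box
    # Placeholder property check: "holds" once the box is small enough.
    return bool(np.all(hi - lo < 0.25))

def parallel_verify(lo, hi, depth=4):
    boxes = [(np.asarray(lo, float), np.asarray(hi, float))]
    for _ in range(depth):                 # iterative splitting
        boxes = [b for box in boxes for b in bisect_box(*box)]
    with ProcessPoolExecutor() as pool:    # sub-boxes are independent
        return all(pool.map(verify_region, boxes))
```

On platforms that spawn rather than fork worker processes, call it under an if __name__ == '__main__' guard, e.g. parallel_verify(np.zeros(2), np.ones(2)).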
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.