A General Pairwise Comparison Model for Extremely Sparse Networks
- URL: http://arxiv.org/abs/2002.08853v3
- Date: Thu, 10 Mar 2022 15:59:40 GMT
- Title: A General Pairwise Comparison Model for Extremely Sparse Networks
- Authors: Ruijian Han, Yiming Xu and Kani Chen
- Abstract summary: We show that the maximum likelihood estimator for the latent score vector of the subjects is uniformly consistent under a near-minimal condition on network sparsity.
Our results guarantee that the maximum likelihood estimator is justified for estimation in large-scale pairwise comparison networks.
- Score: 5.298287413134346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical inference using pairwise comparison data is an effective approach
to analyzing large-scale sparse networks. In this paper, we propose a general
framework to model the mutual interactions in a network, which enjoys ample
flexibility in terms of model parametrization. Under this setup, we show that
the maximum likelihood estimator for the latent score vector of the subjects is
uniformly consistent under a near-minimal condition on network sparsity. This
condition is sharp in terms of the leading order asymptotics describing the
sparsity. Our analysis utilizes a novel chaining technique and illustrates an
important connection between graph topology and model consistency. Our results
guarantee that the maximum likelihood estimator is justified for estimation in
large-scale pairwise comparison networks where data are asymptotically
deficient. Simulation studies are provided in support of our theoretical
findings.
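As a concrete, hedged illustration of the setup (not code from the paper): the Bradley-Terry model is one classical instance of such a pairwise comparison framework, and the sketch below fits it by maximum likelihood on a sparse Erdős–Rényi comparison graph whose edge probability scales like log(n)/n, mirroring the near-minimal sparsity regime discussed above. The sample size, the constant in the edge probability, and the optimizer are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (illustrative, not the paper's code): maximum likelihood
# estimation of latent scores in a Bradley-Terry model, a classical special
# case of a general pairwise comparison framework. Comparisons are observed
# only on a sparse Erdos-Renyi graph with edge probability ~ log(n)/n.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n = 200                                  # number of subjects (illustrative)
theta_true = rng.normal(size=n)
theta_true -= theta_true.mean()          # identifiability: scores sum to zero
p_edge = 3 * np.log(n) / n               # sparse comparison graph

# Sample the comparison network and one outcome per observed pair.
pairs, wins = [], []
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p_edge:
            prob_i = 1.0 / (1.0 + np.exp(-(theta_true[i] - theta_true[j])))
            pairs.append((i, j))
            wins.append(float(rng.random() < prob_i))   # 1.0 if i beats j
pairs, wins = np.array(pairs), np.array(wins)

def neg_log_lik(theta):
    diff = theta[pairs[:, 0]] - theta[pairs[:, 1]]
    # -log-likelihood: -log sigmoid(diff) if i won, -log sigmoid(-diff) otherwise
    return np.sum(wins * np.logaddexp(0.0, -diff) + (1 - wins) * np.logaddexp(0.0, diff))

def grad(theta):
    diff = theta[pairs[:, 0]] - theta[pairs[:, 1]]
    g_edge = 1.0 / (1.0 + np.exp(-diff)) - wins   # fitted win prob minus outcome
    g = np.zeros(n)
    np.add.at(g, pairs[:, 0], g_edge)
    np.add.at(g, pairs[:, 1], -g_edge)
    return g

# The MLE is only defined up to a constant shift (and may diverge if the
# comparison graph is disconnected), so the fitted scores are re-centered.
res = minimize(neg_log_lik, np.zeros(n), jac=grad, method="L-BFGS-B")
theta_hat = res.x - res.x.mean()

print("observed pairs:", len(pairs))
print("uniform (sup-norm) error:", np.max(np.abs(theta_hat - theta_true)))
```

The sup-norm error printed at the end is the quantity whose vanishing corresponds to the uniform consistency claimed in the abstract; shrinking the constant in p_edge below the connectivity threshold typically disconnects the graph and breaks the fit, which illustrates the role of graph topology.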
Related papers
- Statistical ranking with dynamic covariates [6.729750785106628]
We introduce an efficient alternating algorithm to compute the maximum likelihood estimator (MLE).
A comprehensive numerical study is conducted to corroborate our theoretical findings and demonstrate the application of the proposed model to real-world datasets, including horse racing and tennis competitions.
arXiv Detail & Related papers (2024-06-24T10:26:05Z) - Statistical inference for pairwise comparison models [5.487882744996216]
This paper establishes near-optimal asymptotic normality for the maximum likelihood estimator in a broad class of pairwise comparison models.
The key idea lies in identifying the Fisher information matrix as a weighted graph Laplacian, which can be studied via a meticulous spectral analysis (a worked special case of this identification is sketched after this list).
arXiv Detail & Related papers (2024-01-16T16:14:09Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training of large models.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z) - Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
Undirected Weighted Networks (UWNs) are commonly found in big-data applications.
Existing models fail either to model their intrinsic symmetry or to handle their low data density.
A Proximal Symmetric Non-negative Latent-factor-analysis model is therefore proposed.
arXiv Detail & Related papers (2023-06-06T13:03:24Z) - Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results in both the infinite-sample and finite-sample regimes.
arXiv Detail & Related papers (2022-10-03T06:09:01Z) - Distributed Nonparametric Estimation under Communication Constraints [0.0]
We provide a general framework for understanding the behavior of distributed estimation under communication constraints.
We derive minimax lower and matching upper bounds in the distributed regression, density estimation, classification, Poisson regression and volatility estimation models.
arXiv Detail & Related papers (2022-04-21T19:04:50Z) - The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer
Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Generalization Properties of Optimal Transport GANs with Latent
Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear
Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z) - Residual Correlation in Graph Neural Network Regression [39.54530450932135]
We show that the conditional independence assumption severely limits predictive power.
We address this problem with an interpretable and efficient framework.
Our framework achieves substantially higher accuracy than competing baselines.
arXiv Detail & Related papers (2020-02-19T16:32:54Z)
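As a concrete illustration of the Fisher-information-as-graph-Laplacian identification mentioned in the second related paper above (this is the standard computation for the Bradley-Terry special case, not material taken from that paper): with latent scores theta_1, ..., theta_n and n_ij comparisons observed on each edge (i, j) of the comparison graph, the Fisher information is

```latex
I(\theta) \;=\; \sum_{(i,j)\in E} n_{ij}\, p_{ij}\,(1-p_{ij})\,(e_i - e_j)(e_i - e_j)^{\top},
\qquad p_{ij} \;=\; \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}},
```

which is exactly the Laplacian of the comparison graph with edge weights w_ij = n_ij p_ij (1 - p_ij). Its smallest nonzero eigenvalue (the spectral gap) therefore controls the precision of the MLE, making explicit the connection between graph topology and model consistency highlighted throughout this list.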