Extension of Saaty's inconsistency index to incomplete comparisons:
Approximated thresholds
- URL: http://arxiv.org/abs/2102.10558v1
- Date: Sun, 21 Feb 2021 08:39:37 GMT
- Title: Extension of Saaty's inconsistency index to incomplete comparisons:
Approximated thresholds
- Authors: Kolos Csaba Ágoston and László Csató
- Abstract summary: This paper generalises the inconsistency index proposed by Saaty to incomplete pairwise comparison matrices.
The extension is based on filling in the missing elements so that the largest eigenvalue of the incomplete matrix is minimised.
Our results can be used by practitioners as a statistical criterion for accepting/rejecting an incomplete pairwise comparison matrix.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pairwise comparison matrices are increasingly used in settings where some
pairs are missing. However, there exist few inconsistency indices to analyse
such incomplete data sets and even fewer measures have an associated threshold.
This paper generalises the inconsistency index proposed by Saaty to incomplete
pairwise comparison matrices. The extension is based on filling in the missing
elements to minimise the largest eigenvalue of the incomplete matrix. It
means that the well-established values of the random index, a crucial component
of the consistency ratio for which the famous threshold of 0.1 provides the
condition for the acceptable level of inconsistency, cannot be directly
adopted. The inconsistency of random matrices turns out to be a function of
matrix size and the number of missing elements, with a nearly linear dependence
in the case of the latter variable. Our results can be directly used by
practitioners as a statistical criterion for accepting/rejecting an incomplete
pairwise comparison matrix.
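
To make the approach concrete, the following is a minimal sketch, not the authors' implementation, of how Saaty's consistency index CI = (lambda_max - n) / (n - 1) can be computed for a complete pairwise comparison matrix and generalised to an incomplete one by choosing the missing entries (together with their reciprocals) so that the largest eigenvalue is minimised. Python with NumPy and SciPy is assumed; the function names, the toy example matrix, and the use of Nelder-Mead over the logarithms of the missing entries are illustrative choices only.

# Minimal illustrative sketch, not the authors' code.
# Assumes NumPy and SciPy; function names and the example matrix are hypothetical.
import numpy as np
from scipy.optimize import minimize


def consistency_index(A):
    """Saaty's CI = (lambda_max - n) / (n - 1) for a complete
    positive reciprocal pairwise comparison matrix."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)  # Perron eigenvalue is real
    return (lam_max - n) / (n - 1)


def incomplete_consistency_index(A, missing):
    """Generalised CI for an incomplete matrix: the missing entries
    (and their reciprocals) are chosen so that the largest eigenvalue
    of the completed matrix is minimal, as described in the abstract.
    The optimisation runs over the logarithms of the missing entries."""
    n = A.shape[0]

    def lam_max(log_x):
        B = A.copy()
        for k, (i, j) in enumerate(missing):
            B[i, j] = np.exp(log_x[k])
            B[j, i] = np.exp(-log_x[k])
        return np.max(np.linalg.eigvals(B).real)

    res = minimize(lam_max, x0=np.zeros(len(missing)), method="Nelder-Mead")
    return (res.fun - n) / (n - 1)


# Toy example: a 4x4 reciprocal matrix in which the comparison between the
# first and third alternatives is missing (index pair (0, 2)); the 1.0
# placeholders at those positions are overwritten by the optimiser.
A = np.array([
    [1.0, 2.0, 1.0, 4.0],
    [0.5, 1.0, 2.0, 2.0],
    [1.0, 0.5, 1.0, 3.0],
    [0.25, 0.5, 1.0 / 3.0, 1.0],
])
ci = incomplete_consistency_index(A, missing=[(0, 2)])
print(f"generalised CI: {ci:.4f}")

With the minimised largest eigenvalue in hand, the consistency ratio is formed as CR = CI / RI, but, as the abstract stresses, the random index RI now depends on both the matrix size and the number of missing comparisons (nearly linearly in the latter), so the acceptance thresholds should be taken from the approximated values reported in the paper rather than from Saaty's classical tables.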
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Sublinear Time Approximation of Text Similarity Matrices [50.73398637380375]
We introduce a generalization of the popular Nyström method to the indefinite setting.
Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix.
We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices.
arXiv Detail & Related papers (2021-12-17T17:04:34Z) - Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Adversarial Robust Low Rank Matrix Estimation: Compressed Sensing and Matrix Completion [2.0257616108612373]
We deal with matrix compressed sensing (which includes the lasso as a special case) and matrix completion.
We propose a simple unified approach based on a combination of the Huber loss function and the nuclear norm penalization.
arXiv Detail & Related papers (2020-10-25T02:32:07Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Matrix Completion with Quantified Uncertainty through Low Rank Gaussian
Copula [30.84155327760468]
This paper proposes a framework for missing value imputation with quantified uncertainty.
The time required to fit the model scales linearly with the number of rows and the number of columns in the dataset.
Empirical results show the method yields state-of-the-art imputation accuracy across a wide range of data types.
arXiv Detail & Related papers (2020-06-18T19:51:42Z) - Median Matrix Completion: from Embarrassment to Optimality [16.667260586938234]
We consider matrix completion with absolute deviation loss and obtain an estimator of the median matrix.
Despite several appealing properties of the median, the non-smooth absolute deviation loss leads to a computational challenge.
We propose a novel refinement step, which turns such inefficient estimators into a (nearly) rate-optimal matrix completion procedure.
arXiv Detail & Related papers (2020-06-18T10:01:22Z) - Covariance Estimation for Matrix-valued Data [9.739753590548796]
We propose a class of distribution-free regularized covariance estimation methods for high-dimensional matrix data.
We formulate a unified framework for estimating bandable covariance, and introduce an efficient algorithm based on rank one unconstrained Kronecker product approximation.
We demonstrate the superior finite-sample performance of our methods using simulations and real applications to a gridded temperature anomalies dataset and an S&P 500 stock data analysis.
arXiv Detail & Related papers (2020-04-11T02:15:26Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)