Matrix Decomposition Perspective for Accuracy Assessment of Item
Response Theory
- URL: http://arxiv.org/abs/2203.03112v1
- Date: Mon, 7 Mar 2022 03:17:41 GMT
- Title: Matrix Decomposition Perspective for Accuracy Assessment of Item
Response Theory
- Authors: Hideo Hirose
- Abstract summary: This paper focuses on the performance of the reconstructed response matrix.
We have found that the performance of the singular value decomposition method and the matrix decomposition method is almost the same when the response matrix is a complete matrix.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Item response theory obtains estimates and their confidence
intervals for the ability parameters of examinees and the difficulty parameters
of problems, using an observed item response matrix consisting of 0/1 elements.
Many papers discuss the performance of these parameter estimates; this paper
does not. Using the maximum likelihood estimates, we can reconstruct an
estimated item response matrix. We can then assess the accuracy of this
reconstructed matrix against the observed response matrix from a matrix
decomposition perspective. That is, this paper focuses on the performance of
the reconstructed response matrix.
To compare the performance of item response theory with other methods, we
provide two kinds of low-rank response matrices that approximate the observed
response matrix: one obtained via the singular value decomposition method when
the response matrix is complete, and the other via the matrix decomposition
method when the response matrix is incomplete. We first find that the singular
value decomposition method and the matrix decomposition method perform almost
identically when the response matrix is complete. Here, performance is measured
by the closeness between the two matrices, using the root mean squared error
and the accuracy. Second, we find that the closeness of the reconstructed
matrix obtained from item response theory to the observed matrix lies between
that of the two approximated low-rank response matrices obtained from the
matrix decomposition method with k= and k=2, where k indicates that the first
k columns are used in the decomposed matrices.
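The comparison the abstract describes can be sketched for the complete-matrix case: truncate the SVD of a 0/1 response matrix to rank k and measure the closeness of the reconstruction via RMSE and accuracy. This is a minimal illustration assuming a random synthetic response matrix, not the paper's examinee data or its specific matrix decomposition method.

```python
# Sketch: rank-k SVD approximation of a complete 0/1 response matrix,
# scored by RMSE and (thresholded) accuracy against the observed matrix.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic observed response matrix (20 examinees x 15 items, 0/1 entries).
X = (rng.random((20, 15)) < 0.6).astype(float)

def rank_k_approximation(X, k):
    """Best rank-k approximation of X via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def rmse(A, B):
    """Root mean squared error between two matrices of equal shape."""
    return np.sqrt(np.mean((A - B) ** 2))

def accuracy(X, X_hat, threshold=0.5):
    """Fraction of entries whose thresholded reconstruction matches X."""
    return np.mean((X_hat >= threshold).astype(float) == X)

for k in (1, 2):
    X_k = rank_k_approximation(X, k)
    print(f"k={k}: RMSE={rmse(X, X_k):.4f}, accuracy={accuracy(X, X_k):.3f}")
```

By the Eckart-Young theorem, the rank-2 truncation can never have larger RMSE than the rank-1 truncation, which is why the paper's rank-1 and rank-2 approximations bracket a range of closeness values.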
Related papers
- Matrix decompositions in Quantum Optics: Takagi/Autonne,
Bloch-Messiah/Euler, Iwasawa, and Williamson [0.0]
We present four important matrix decompositions commonly used in quantum optics.
The first two of these decompositions are specialized versions of the singular-value decomposition.
The third factors any symplectic matrix in a unique way in terms of matrices that belong to different subgroups of the symplectic group.
arXiv Detail & Related papers (2024-03-07T15:43:17Z) - Mutually-orthogonal unitary and orthogonal matrices [6.9607365816307]
As an application in quantum information theory, we show that the minimum and maximum numbers of unextendible maximally entangled bases within a real two-qutrit system are three and four, respectively.
arXiv Detail & Related papers (2023-09-20T08:20:57Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z) - Deep Two-way Matrix Reordering for Relational Data Analysis [41.60125423028092]
Matrix reordering is a task to permute rows and columns of a given observed matrix.
We propose a new matrix reordering method, Deep Two-way Matrix Reordering (DeepTMR), using a neural network model.
We demonstrate the effectiveness of proposed DeepTMR by applying it to both synthetic and practical data sets.
arXiv Detail & Related papers (2021-03-26T01:31:24Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - J-matrix method of scattering in one dimension: The relativistic theory [0.0]
We make a relativistic extension of the one-dimensional J-matrix method of scattering.
The relativistic potential matrix is a combination of vector, scalar, and pseudo-scalar components.
arXiv Detail & Related papers (2020-01-14T19:02:15Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.