Unfolding by Folding: a resampling approach to the problem of matrix
inversion without actually inverting any matrix
- URL: http://arxiv.org/abs/2009.02913v1
- Date: Mon, 7 Sep 2020 07:20:45 GMT
- Title: Unfolding by Folding: a resampling approach to the problem of matrix
inversion without actually inverting any matrix
- Authors: Pietro Vischia
- Abstract summary: Matrix inversion problems are often encountered in experimental physics, and in particular in high-energy particle physics.
In this Manuscript, I take a different approach to the unfolding problem.
I sample many distributions in generator space, fold them through the original response matrix, and pick the generator-level distribution that yields the folded distribution closest to the data distribution.
- Score: 1.9641471892864126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matrix inversion problems are often encountered in experimental physics, and
in particular in high-energy particle physics, under the name of unfolding. The
true spectrum of a physical quantity is deformed by the presence of a detector,
resulting in an observed spectrum. If we discretize both the true and observed
spectra into histograms, we can model the detector response via a matrix.
Inferring a true spectrum starting from an observed spectrum therefore requires
inverting the response matrix. Many methods exist in the literature for this
task, all starting from the observed spectrum and using a simulated true
spectrum as a guide to obtain a meaningful solution in cases where the response
matrix is not easily invertible.
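Concretely, once the true and observed spectra are discretized into histograms, the detector response acts as a matrix on the true histogram. A minimal numpy sketch (the response-matrix values and bin counts below are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative 3-bin response matrix: R[i, j] is the probability that an
# event generated in true-space bin j is observed in smeared-space bin i.
R = np.array([
    [0.8, 0.1, 0.0],
    [0.2, 0.8, 0.2],
    [0.0, 0.1, 0.8],
])

true_spectrum = np.array([100.0, 200.0, 150.0])  # generator-level histogram

# Folding: the observed (smeared) spectrum is the matrix-vector product.
observed_spectrum = R @ true_spectrum

# Naive unfolding inverts the response matrix; this is exactly the step the
# Manuscript's resampling approach avoids.
naive_unfolded = np.linalg.solve(R, observed_spectrum)
```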
In this Manuscript, I take a different approach to the unfolding problem.
Rather than inverting the response matrix and transforming the observed
distribution into the most likely parent distribution in generator space, I
sample many distributions in generator space, fold them through the original
response matrix, and pick the generator-level distribution that yields the
folded distribution closest to the data distribution. Regularization schemes
can be introduced to treat the case where non-diagonal response matrices result
in high-frequency oscillations of the solution in true space, and the
introduced bias is studied.
The algorithm performs as well as traditional unfolding algorithms in cases
where the inverse problem is well-defined in terms of the discretization of the
true and smeared space, and outperforms them in cases where the inverse problem
is ill-defined---when the number of truth-space bins is larger than that of
smeared-space bins. These advantages stem from the fact that the algorithm does
not technically invert any matrix and uses only the data distribution as a
guide to choose the best solution.
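The fold-and-compare strategy described in the abstract can be sketched as follows. This is a simplified illustration of the idea only, not the paper's implementation: the Dirichlet candidate sampler, the squared-distance metric, and the normalization to the observed event count are all placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 3-bin response matrix and observed histogram (not from the paper).
R = np.array([
    [0.8, 0.1, 0.0],
    [0.2, 0.8, 0.2],
    [0.0, 0.1, 0.8],
])
data = np.array([100.0, 210.0, 140.0])  # observed (smeared) histogram

def unfold_by_folding(R, data, n_samples=20000):
    """Sample candidate generator-level histograms, fold each through R,
    and keep the one whose folded image is closest to the data."""
    total = data.sum()  # candidates normalized to the observed event count
    best, best_dist = None, np.inf
    for _ in range(n_samples):
        # Placeholder sampler: random bin fractions scaled to the total yield.
        candidate = rng.dirichlet(np.ones(R.shape[1])) * total
        folded = R @ candidate
        dist = np.sum((folded - data) ** 2)  # placeholder distance metric
        if dist < best_dist:
            best, best_dist = candidate, dist
    return best, best_dist

unfolded, dist = unfold_by_folding(R, data)
# No matrix is inverted: the solution is chosen purely by comparing folded
# candidates to the observed distribution.
```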
Related papers
- On the phase diagram of extensive-rank symmetric matrix denoising beyond rotational invariance [5.058205542605482]
We make progress towards the understanding of matrix denoising when the hidden signal is a factored matrix $XX^\intercal$ that is not rotationally invariant.
We argue that it is only beyond the transition that factorisation, i.e., estimating $X$ itself, becomes possible up to sign and permutation universality.
arXiv Detail & Related papers (2024-11-04T10:50:37Z)
- Efficient conversion from fermionic Gaussian states to matrix product states [48.225436651971805]
We propose a highly efficient algorithm that converts fermionic Gaussian states to matrix product states.
It can be formulated for finite-size systems without translation invariance, but becomes particularly appealing when applied to infinite systems.
The potential of our method is demonstrated by numerical calculations in two chiral spin liquids.
arXiv Detail & Related papers (2024-08-02T10:15:26Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Learning Sparse High-Dimensional Matrix-Valued Graphical Models From Dependent Data [12.94486861344922]
We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional, stationary matrix-variate Gaussian time series.
We consider a sparsity-based formulation of the problem with a Kronecker-decomposable power spectral density (PSD).
We illustrate our approach using numerical examples utilizing both synthetic and real data.
arXiv Detail & Related papers (2024-04-29T19:32:50Z)
- Singular value decomposition based matrix surgery [0.0]
We develop a procedure to reduce and control the condition number of random matrices.
We investigate the effect on the persistent homology (PH) of point clouds of well- and ill-conditioned matrices.
arXiv Detail & Related papers (2023-02-22T15:30:08Z)
- Trimmed Sampling Algorithm for the Noisy Generalized Eigenvalue Problem [0.0]
Solving the generalized eigenvalue problem is a useful method for finding energy eigenstates of large quantum systems.
The problem is especially bad when matrix elements are evaluated using Monte Carlo methods and have significant error bars.
We introduce the trimmed sampling algorithm in order to solve this problem.
arXiv Detail & Related papers (2022-09-05T18:10:12Z)
- Stochastic Parallelizable Eigengap Dilation for Large Graph Clustering [12.544602297450533]
We aim to identify clusters of nodes where most edges fall within clusters and only a few edges fall between clusters.
A core step of spectral clustering is performing an eigendecomposition of the corresponding graph Laplacian matrix.
This paper introduces a parallelizable approach to dilating the spectrum in order to accelerate SVD solvers and in turn, spectral clustering.
arXiv Detail & Related papers (2022-07-29T10:13:07Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses ${\cal O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the, typically low, rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
arXiv Detail & Related papers (2020-04-16T22:53:00Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
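As an illustration of the sketching idea in this last entry, here is a generic sketch-and-solve least-squares example using a subsampled randomized Hadamard transform. It is a sketch under assumptions, not the paper's algorithm: the dimensions, sketch size, and use of plain sketch-and-solve (rather than the paper's iterative scheme) are illustrative choices.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

n, d, m = 256, 5, 64  # n must be a power of two for the Hadamard matrix
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Subsampled randomized Hadamard transform (SRHT): S = sqrt(n/m) * P H D,
# where D flips signs, H is the orthonormal Hadamard matrix, P subsamples rows.
D = np.diag(rng.choice([-1.0, 1.0], size=n))
H = hadamard(n) / np.sqrt(n)
rows = rng.choice(n, size=m, replace=False)
SA = np.sqrt(n / m) * (H @ D @ A)[rows]
Sb = np.sqrt(n / m) * (H @ D @ b)[rows]

# Sketch-and-solve: least squares on the much smaller sketched system.
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The sketched solution approximates the exact least-squares solution while only ever factorizing an m-by-d system instead of the full n-by-d one.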
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.