Phase retrieval with rank $d$ measurements -- \emph{descending} algorithms phase transitions
- URL: http://arxiv.org/abs/2506.18282v1
- Date: Mon, 23 Jun 2025 04:28:46 GMT
- Title: Phase retrieval with rank $d$ measurements -- \emph{descending} algorithms phase transitions
- Authors: Mihailo Stojnic
- Abstract summary: Companion paper [118] developed a powerful \emph{Random duality theory} (RDT) based analytical program. We show how it can be utilized to handle rank $d$ positive definite phase retrieval (PR) measurements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Companion paper [118] developed a powerful \emph{Random duality theory} (RDT) based analytical program to statistically characterize the performance of \emph{descending} phase retrieval algorithms (dPR) (these include all variants of gradient descent, among them the widely popular Wirtinger flows). We here generalize the program and show how it can be utilized to handle rank $d$ positive definite phase retrieval (PR) measurements (with special cases $d=1$ and $d=2$ serving as emulations of the real and complex phase retrievals, respectively). In particular, we observe that the minimal sample complexity ratio (number of measurements scaled by the dimension of the unknown signal) which ensures dPR's success exhibits a phase transition (PT) phenomenon. For both plain and lifted RDT we determine the phase transition locations. To complement the theoretical results we implement a log barrier gradient descent variant and observe that, even in small dimensional scenarios (with problem sizes on the order of 100), the simulated phase transitions are in excellent agreement with the theoretical predictions.
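As a rough illustration of the descending-algorithm setup, here is a minimal NumPy sketch of gradient descent on a squared-residual loss for rank-$d$ PSD measurements $y_i = \bar{x}^T A_i \bar{x}$ with $A_i = G_i G_i^T$. This is not the paper's log barrier variant; the dimensions, step size, and initialization near the planted signal are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 30, 2, 300        # signal dimension, measurement rank, number of measurements

# Rank-d PSD measurements: A_i = G_i G_i^T, so x^T A_i x = ||G_i^T x||^2
G = rng.standard_normal((m, n, d)) / np.sqrt(d)
xbar = rng.standard_normal(n)
xbar /= np.linalg.norm(xbar)

def quad_forms(x):
    z = np.einsum('ind,n->id', G, x)    # z[i] = G_i^T x
    return (z ** 2).sum(axis=1), z

y, _ = quad_forms(xbar)                 # noiseless measurements y_i = xbar^T A_i xbar

def grad(x):
    # gradient of (1/m) sum_i (x^T A_i x - y_i)^2 is (4/m) sum_i (x^T A_i x - y_i) A_i x
    q, z = quad_forms(x)
    Ax = np.einsum('ind,id->in', G, z)  # row i is A_i x
    return (4.0 / m) * ((q - y) @ Ax)

x = xbar + 0.3 * rng.standard_normal(n) # illustrative init near the planted signal
for _ in range(2000):
    x -= 0.05 * grad(x)

print("relative error:", np.linalg.norm(x - xbar) / np.linalg.norm(xbar))
```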
Related papers
- Phase transition of \emph{descending} phase retrieval algorithms
We study theoretical limits of \emph{descending} phase retrieval algorithms. We identify the concepts of \emph{parametric manifold} and its \emph{funneling points} as key mathematical objects.
arXiv Detail & Related papers (2025-06-23T04:10:35Z)
- Distributed Sign Momentum with Local Steps for Training Transformers
Pre-training Transformer models is resource-intensive. Recent studies have shown that sign momentum is an efficient technique for training large-scale deep learning models. This paper investigates a novel communication-efficient distributed sign momentum method with multiple local steps.
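The paper's exact method is not reproduced here, but the generic pattern behind distributed sign momentum with local steps can be sketched: each worker runs several local SGD steps, the server treats the averaged model displacement as a pseudo-gradient, folds it into a momentum buffer, and applies a sign update. The toy per-worker quadratics and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, workers, rounds, local_steps = 10, 4, 50, 5
lr_local, lr_server, beta = 0.05, 0.02, 0.9

centers = rng.standard_normal((workers, dim))   # worker k's loss: 0.5 * ||w - centers[k]||^2
w = np.zeros(dim)
momentum = np.zeros(dim)

for _ in range(rounds):
    deltas = []
    for k in range(workers):
        w_k = w.copy()
        for _ in range(local_steps):            # multiple local SGD steps on worker k
            w_k -= lr_local * (w_k - centers[k])
        deltas.append(w - w_k)                  # pseudo-gradient: local model displacement
    pseudo_grad = np.mean(deltas, axis=0)       # server-side aggregation
    momentum = beta * momentum + (1 - beta) * pseudo_grad
    w -= lr_server * np.sign(momentum)          # sign-momentum server update

print("distance to consensus optimum:", np.linalg.norm(w - centers.mean(axis=0)))
```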
arXiv Detail & Related papers (2024-11-26T20:31:11Z)
- Scalable approach to monitored quantum dynamics and entanglement phase transitions
Measurement-induced entanglement phase transitions in monitored quantum circuits have stimulated activity in a diverse research community.
We present a solution by introducing a scalable protocol in $U(1)$-symmetric circuits.
arXiv Detail & Related papers (2024-06-27T09:59:58Z)
- Stable Phase Retrieval with Mirror Descent
We show that mirror descent converges to a critical point of the phase retrieval problem.
We provide global convergence guarantees that ensure that with high probability, mirror descent converges to a global minimizer near the true vector.
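For intuition, here is a minimal sketch of mirror descent on the intensity loss with the quartic mirror potential $\psi(x) = \frac{1}{4}\|x\|^4 + \frac{1}{2}\|x\|^2$ commonly used for phase retrieval; the measurement model, step size, and init near the true vector are assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, gamma = 20, 200, 0.05

A = rng.standard_normal((m, n))
xbar = rng.standard_normal(n)
xbar /= np.linalg.norm(xbar)
y = (A @ xbar) ** 2                              # intensity (phaseless) measurements

def grad_f(x):
    # gradient of f(x) = (1/4m) sum_i ((a_i^T x)^2 - y_i)^2
    z = A @ x
    return A.T @ ((z ** 2 - y) * z) / m

def mirror_inverse(v):
    # invert grad psi(x) = (||x||^2 + 1) x: x = (t/||v||) v with t^3 + t = ||v||
    r = np.linalg.norm(v)
    if r == 0.0:
        return v
    roots = np.roots([1.0, 0.0, 1.0, -r])
    t = roots[np.argmin(np.abs(roots.imag))].real   # the unique real root
    return (t / r) * v

x = xbar + 0.3 * rng.standard_normal(n)          # illustrative init near the true signal
for _ in range(1000):
    v = (x @ x + 1.0) * x - gamma * grad_f(x)    # mirror (dual-space) step
    x = mirror_inverse(v)

err = min(np.linalg.norm(x - xbar), np.linalg.norm(x + xbar)) / np.linalg.norm(xbar)
print("relative error (up to global sign):", err)
```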
arXiv Detail & Related papers (2024-05-17T13:07:14Z)
- Adaptive Federated Learning Over the Air
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T)/T^{1-\frac{1}{\alpha}})$.
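A minimal sketch of the over-the-air idea with an AdaGrad server step, assuming toy per-worker quadratics and a simple additive-noise channel model; this does not reproduce the paper's protocol or power-control details.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, workers, rounds = 10, 5, 300
lr, eps, noise_std = 0.5, 1e-8, 0.05

centers = rng.standard_normal((workers, dim))   # worker k's loss: 0.5 * ||w - centers[k]||^2
w = np.zeros(dim)
accum = np.zeros(dim)                           # server-side AdaGrad accumulator

for _ in range(rounds):
    local_grads = w - centers                   # per-worker gradients, shape (workers, dim)
    # over-the-air aggregation: the channel averages the transmissions and adds noise
    g = local_grads.mean(axis=0) + noise_std * rng.standard_normal(dim)
    accum += g ** 2
    w -= lr * g / (np.sqrt(accum) + eps)        # AdaGrad server update

print("distance to optimum:", np.linalg.norm(w - centers.mean(axis=0)))
```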
arXiv Detail & Related papers (2024-03-11T09:10:37Z)
- Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability
In machine learning optimization, gradient descent (GD) often operates at the edge of stability (EoS).
This paper studies the convergence and implicit bias of constant-stepsize GD for logistic regression on linearly separable data in the EoS regime.
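A small sketch of the setting: constant-stepsize GD on logistic loss over linearly separable data, with a deliberately large step size so the iterates can enter the oscillatory EoS regime. The data model and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, eta = 40, 2, 4.0                  # a large constant step size probes the EoS regime

X = rng.standard_normal((n, d))
labels = np.sign(X[:, 0])               # linearly separable by the first coordinate

def loss_and_grad(w):
    margins = labels * (X @ w)
    p = np.exp(-np.logaddexp(0.0, margins))     # sigmoid(-margin), numerically stable
    loss = np.logaddexp(0.0, -margins).mean()   # mean logistic loss
    return loss, -(X.T @ (labels * p)) / n

w = np.zeros(d)
for _ in range(3000):
    loss, g = loss_and_grad(w)
    w -= eta * g

# On separable data the loss keeps decreasing and w/||w|| approaches a max-margin direction
print("final loss:", loss, " direction:", w / np.linalg.norm(w))
```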
arXiv Detail & Related papers (2023-05-19T16:24:47Z)
- Embed to Control Partially Observed Systems: Representation Learning with Provable Sample Efficiency
Reinforcement learning in partially observed Markov decision processes (POMDPs) faces two challenges.
It often takes the full history to predict the future, which induces a sample complexity that scales exponentially with the horizon.
We propose a reinforcement learning algorithm named Embed to Control (ETC), which learns the representation at two levels while optimizing the policy.
arXiv Detail & Related papers (2022-05-26T16:34:46Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
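The core mechanics can be sketched in 1-D: anneal from a standard normal to an unnormalized Gaussian target, accumulate AIS log-weights, and use unadjusted Langevin transitions in place of Metropolis-Hastings corrections (introducing the bias the paper trades for differentiability). The target, schedule, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
num_chains, num_temps, eps = 2000, 200, 0.05

def log_p0(x):                          # base density: standard normal (normalized)
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_p1(x):                          # unnormalized target; true log Z = log sqrt(pi)
    return -(x - 2.0) ** 2

betas = np.linspace(0.0, 1.0, num_temps)
x = rng.standard_normal(num_chains)     # exact samples from p0
log_w = np.zeros(num_chains)

for k in range(1, num_temps):
    log_w += (betas[k] - betas[k - 1]) * (log_p1(x) - log_p0(x))
    # unadjusted Langevin transition (no Metropolis-Hastings correction)
    beta = betas[k]
    grad = (1.0 - beta) * (-x) + beta * (-2.0 * (x - 2.0))   # grad of log pi_beta
    x = x + 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal(num_chains)

log_z = log_w.max() + np.log(np.mean(np.exp(log_w - log_w.max())))
print("estimated log Z:", log_z, " true:", 0.5 * np.log(np.pi))
```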
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
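As a hedged illustration of minimizing an amplitude-based empirical loss under sparsity, here is a generic amplitude-flow iteration with hard thresholding; it is not the paper's algorithm (which also covers generative priors), and the initialization and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, s, m = 100, 5, 120                            # ambient dim, sparsity, samples ~ O(s log n)

A = rng.standard_normal((m, n))
xbar = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
xbar[support] = rng.standard_normal(s)
b = np.abs(A @ xbar)                             # amplitude (phaseless) measurements

def hard_threshold(x, s):
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]             # keep the s largest-magnitude entries
    out[idx] = x[idx]
    return out

x = hard_threshold(A.T @ b / m, s)               # crude init: E[a |a^T x|] is proportional to x
for _ in range(300):
    z = A @ x
    g = A.T @ ((np.abs(z) - b) * np.sign(z)) / m # (sub)gradient of the amplitude loss
    x = hard_threshold(x - 0.5 * g, s)

err = min(np.linalg.norm(x - xbar), np.linalg.norm(x + xbar)) / np.linalg.norm(xbar)
print("relative error (up to sign):", err)
```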
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Statistical Approach to Quantum Phase Estimation
We introduce a new statistical and variational approach to the phase estimation algorithm (PEA).
Unlike the traditional and iterative PEAs, which return only an eigenphase estimate, the proposed method can determine any unknown eigenstate-eigenphase pair.
We show the simulation results of the method with the Qiskit package on the IBM Q platform and on a local computer.
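As a toy illustration of estimating an eigenphase from measurement statistics (not the paper's variational method, and without Qiskit), one can simulate Hadamard-test outcomes and invert the sample mean; the restriction of the phase to $[0, 0.5]$ avoids the $\phi \leftrightarrow 1-\phi$ ambiguity in this sketch.

```python
import numpy as np

rng = np.random.default_rng(8)
true_phase, shots = 0.3137, 200000               # eigenphase phi restricted to [0, 0.5]

# Hadamard-test statistics: P(outcome 0) = (1 + cos(2*pi*phi)) / 2
p0 = 0.5 * (1.0 + np.cos(2 * np.pi * true_phase))
outcomes = rng.random(shots) < p0                # Bernoulli samples of the ancilla measurement

phi_hat = np.arccos(2.0 * outcomes.mean() - 1.0) / (2.0 * np.pi)
print("estimated phase:", phi_hat, " true:", true_phase)
```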
arXiv Detail & Related papers (2021-04-21T00:02:00Z)
- On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent
Constant step-size Stochastic Gradient Descent (SGD) exhibits two phases: a transient phase during which iterates make fast progress towards the optimum, followed by a stationary phase during which iterates oscillate around the optimal point.
We show that efficiently detecting this transition and appropriately decreasing the step size can lead to fast convergence rates.
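A rough sketch of the diagnostic-restart idea on a noisy quadratic: track a running sum of inner products of successive stochastic gradients, and when it turns negative (gradients anti-correlated, indicating oscillation around the optimum) halve the step size. The threshold, burn-in, and noise model are illustrative, not the paper's tuned procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
dim, steps, burn_in = 5, 20000, 500
lr = 0.5

target = rng.standard_normal(dim)                # minimizer of the underlying quadratic

def stoch_grad(w):
    return (w - target) + 0.3 * rng.standard_normal(dim)   # noisy gradient oracle

w = np.zeros(dim)
g_prev = stoch_grad(w)
running, count = 0.0, 0

for _ in range(steps):
    g = stoch_grad(w)
    w -= lr * g
    running += g @ g_prev                        # Pflug-style diagnostic statistic
    g_prev = g
    count += 1
    if count >= burn_in and running < 0.0:       # successive gradients anti-correlated:
        lr *= 0.5                                # stationary phase detected, halve the step
        running, count = 0.0, 0

print("final step size:", lr, " error:", np.linalg.norm(w - target))
```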
arXiv Detail & Related papers (2020-07-01T14:58:01Z)
- Analytic Signal Phase in $N$-D by Linear Symmetry Tensor--fingerprint modeling
We show that the analytic signal phase and its gradient have a hitherto unstudied discontinuity in $2$-D and higher dimensions.
This shortcoming can result in severe artifacts, whereas the problem does not exist in $1$-D signals.
We suggest the use of Linear Symmetry phase, relying on more than one set of Gabor filters, but with a negligible computational add-on.
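For context, the $1$-D analytic signal phase, which the abstract notes is free of this discontinuity, can be computed directly. This sketch uses SciPy's Hilbert transform and is only a $1$-D illustration, not the paper's $N$-D linear symmetry method.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.cos(2 * np.pi * 12 * t + 0.8)        # 12 Hz tone with a 0.8 rad phase offset

analytic = hilbert(signal)                       # signal + i * (Hilbert transform of signal)
phase = np.unwrap(np.angle(analytic))            # instantaneous phase, unwrapped

# The instantaneous frequency recovered from the phase should be ~12 Hz
inst_freq = np.diff(phase) / (2 * np.pi * (t[1] - t[0]))
print("median instantaneous frequency:", np.median(inst_freq))
```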
arXiv Detail & Related papers (2020-05-16T21:17:26Z)