Phase-space entropy at acquisition reflects downstream learnability
- URL: http://arxiv.org/abs/2512.19223v1
- Date: Mon, 22 Dec 2025 10:03:51 GMT
- Title: Phase-space entropy at acquisition reflects downstream learnability
- Authors: Xiu-Cheng Wang, Jun-Jie Zhang, Nan Cheng, Long-Gang Pang, Taijiao Du, Deyu Meng
- Abstract summary: We propose an acquisition-level scalar $ΔS_{\mathcal B}$ based on instrument-resolved phase space. We show theoretically that $ΔS_{\mathcal B}$ correctly identifies the phase-space coherence of periodic sampling. $|ΔS_{\mathcal B}|$ consistently ranks sampling geometries and predicts downstream reconstruction/recognition difficulty \emph{without} training.
- Score: 54.4100065023873
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Modern learning systems work with data that vary widely across domains, but they all ultimately depend on how much structure is already present in the measurements before any model is trained. This raises a basic question: is there a general, modality-agnostic way to quantify how acquisition itself preserves or destroys the information that downstream learners could use? Here we propose an acquisition-level scalar $ΔS_{\mathcal B}$ based on instrument-resolved phase space. Unlike pixelwise distortion or purely spectral errors that often saturate under aggressive undersampling, $ΔS_{\mathcal B}$ directly quantifies how acquisition mixes or removes joint space--frequency structure at the instrument scale. We show theoretically that \(ΔS_{\mathcal B}\) correctly identifies the phase-space coherence of periodic sampling as the physical source of aliasing, recovering classical sampling-theorem consequences. Empirically, across masked image classification, accelerated MRI, and massive MIMO (including over-the-air measurements), $|ΔS_{\mathcal B}|$ consistently ranks sampling geometries and predicts downstream reconstruction/recognition difficulty \emph{without training}. In particular, minimizing $|ΔS_{\mathcal B}|$ enables zero-training selection of variable-density MRI mask parameters that matches designs tuned by conventional pre-reconstruction criteria. These results suggest that phase-space entropy at acquisition reflects downstream learnability, enabling pre-training selection of candidate sampling policies and serving as a shared notion of information preservation across modalities.
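As a rough illustration of the idea in the abstract (not the paper's instrument-resolved definition of $ΔS_{\mathcal B}$), one can compare the Shannon entropy of a crude joint space-frequency distribution before and after subsampling. The spectrogram construction, window size, and mask densities below are all illustrative assumptions:

```python
import numpy as np

def phase_space_entropy(x, win=16):
    """Shannon entropy of a crude joint space-frequency distribution.

    Built from magnitude-squared windowed DFTs (a spectrogram); an
    illustrative stand-in for an instrument-resolved phase-space
    distribution, not the paper's exact construction.
    """
    frames = np.array([x[i:i + win] for i in range(0, len(x) - win + 1, win)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    p = power / power.sum()      # normalize to a probability distribution
    p = p[p > 0]                 # drop empty cells before taking logs
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
t = np.arange(1024)
signal = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(1024)

# Periodic undersampling (keep every 4th sample, zero the rest) is the
# coherent sampling geometry associated with aliasing; a random mask of
# the same density is an incoherent alternative.
periodic = signal * (t % 4 == 0)
random_mask = signal * (rng.random(1024) < 0.25)

s0 = phase_space_entropy(signal)
print("full signal entropy:", s0)
print("|dS| periodic mask: ", abs(phase_space_entropy(periodic) - s0))
print("|dS| random mask:   ", abs(phase_space_entropy(random_mask) - s0))
```

The entropy difference here plays the role of an acquisition-level scalar that can be evaluated per sampling geometry before any model is trained.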
Related papers
- Optimal Unconstrained Self-Distillation in Ridge Regression: Strict Improvements, Precise Asymptotics, and One-Shot Tuning [61.07540493350384]
Self-distillation (SD) is the process of retraining a student on a mixture of ground-truth labels and the teacher's own predictions. We show that for any prediction risk, the optimally mixed student improves upon the ridge teacher for every regularization level. We propose a consistent one-shot tuning method to estimate $\star$ without grid search, sample splitting, or refitting.
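A toy sketch of the self-distillation recipe described above on synthetic ridge data. The mixing weight `alpha`, the penalty `lam`, and the grid are all illustrative assumptions; the optimum (denoted $\star$ in the summary) is found here by brute-force grid search purely for comparison, whereas the paper's contribution is a one-shot estimator that avoids this search:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 50
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + 0.5 * rng.standard_normal(n)
X_test = rng.standard_normal((1000, d))
y_test = X_test @ beta

def ridge(X, y, lam):
    # Closed-form ridge regression solution.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

lam = 5.0
teacher = ridge(X, y, lam)

def student_risk(alpha):
    # Student is refit on a mixture of ground truth and teacher predictions.
    mixed = alpha * y + (1 - alpha) * (X @ teacher)
    w = ridge(X, mixed, lam)
    return np.mean((X_test @ w - y_test) ** 2)

# alpha = 1 recovers the teacher exactly, so the grid optimum can never
# be worse than the teacher on this grid.
risks = {a: student_risk(a) for a in np.linspace(0.0, 1.0, 11)}
best = min(risks, key=risks.get)
print("teacher risk:", student_risk(1.0))
print("best alpha:", best, "student risk:", risks[best])
```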
arXiv Detail & Related papers (2026-02-19T17:21:15Z)
- Analysis of Hessian Scaling for Local and Global Costs in Variational Quantum Algorithm [0.42970700836450487]
We quantify the entrywise resolvability of the Hessian for Variational Quantum Algorithms. We show two distinct scaling regimes that govern the sample complexity required to resolve Hessian entries against shot noise.
arXiv Detail & Related papers (2026-01-31T15:49:23Z)
- Coupled Data and Measurement Space Dynamics for Enhanced Diffusion Posterior Sampling [27.146380722473932]
Inverse problems, where the goal is to recover an unknown signal from noisy or incomplete measurements, are central to medical imaging, remote sensing, and computational biology. We propose a novel framework called \emph{coupled data and measurement space diffusion posterior sampling} (C-DPS), which eliminates the need for constraint tuning or likelihood measurement. C-DPS consistently outperforms existing baselines, both qualitatively and quantitatively, across multiple inverse problem benchmarks.
arXiv Detail & Related papers (2025-10-08T18:59:16Z)
- Post-Hoc Split-Point Self-Consistency Verification for Efficient, Unified Quantification of Aleatoric and Epistemic Uncertainty in Deep Learning [5.996056764788456]
Uncertainty quantification (UQ) is vital for trustworthy deep learning, yet existing methods are either computationally intensive or provide only partial, task-specific estimates. We propose a post-hoc single-forward-pass framework that jointly captures aleatoric and epistemic uncertainty without modifying or retraining pretrained models. Our method applies \emph{Split-Point Analysis} (SPA) to decompose predictive residuals into upper and lower subsets, computing \emph{Mean Absolute Residuals} (MARs) on each side.
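The split-point decomposition described in this summary can be sketched directly: partition residuals at a split point and report one Mean Absolute Residual per side, giving an asymmetric band. The function name, the zero split point, and the skewed toy errors are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def split_point_mars(y_true, y_pred, split=0.0):
    """Mean Absolute Residuals on each side of a split point.

    Residuals are partitioned at `split`; each side is summarized by
    its mean absolute residual, yielding an asymmetric uncertainty
    band around the predictions.
    """
    r = np.asarray(y_true) - np.asarray(y_pred)
    upper = r[r > split]
    lower = r[r <= split]
    mar_up = np.mean(np.abs(upper)) if upper.size else 0.0
    mar_lo = np.mean(np.abs(lower)) if lower.size else 0.0
    return mar_lo, mar_up

rng = np.random.default_rng(2)
y = rng.standard_normal(500)
pred = y + rng.exponential(0.5, 500) - 0.25  # deliberately skewed errors

lo, up = split_point_mars(y, pred)
# The band is asymmetric because the error distribution is skewed.
print(f"band: [pred - {up:.3f}, pred + {lo:.3f}]")
```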
arXiv Detail & Related papers (2025-09-16T17:16:01Z)
- The Measure of Deception: An Analysis of Data Forging in Machine Unlearning [2.141079906482723]
A key challenge in verifying unlearning is forging: adversarially crafting data that mimics the bounds of a target point. We show that adversarial forging is fundamentally limited and that false unlearning claims can, in principle, be detected.
arXiv Detail & Related papers (2025-09-06T23:44:05Z)
- Rao-Blackwell Gradient Estimators for Equivariant Denoising Diffusion [55.95767828747407]
In domains such as molecular and protein generation, physical systems exhibit inherent symmetries that are critical to model. We present a framework that reduces training variance and provides a provably lower-variance gradient estimator. We also present a practical implementation of this estimator incorporating the loss and sampling procedure through a method we call Orbit Diffusion.
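The variance-reduction mechanism behind Rao-Blackwellized estimators can be shown with a minimal example: averaging a statistic over a discretized group orbit (here, planar rotations) preserves its expectation under a symmetric distribution while strictly lowering its variance. This is an illustration of the general principle, not the paper's Orbit Diffusion method:

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    # A statistic that is not rotation-invariant sample by sample, even
    # though its expectation under an isotropic distribution is.
    return x[..., 0] ** 2

def orbit_average_f(x, k=8):
    """Average f over k evenly spaced planar rotations of each sample.

    This is a conditional expectation over the (discretized) group
    orbit: the Rao-Blackwellized version of the plain estimator.
    """
    total = np.zeros(x.shape[0])
    for t in np.linspace(0.0, 2.0 * np.pi, k, endpoint=False):
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        total += f(x @ R.T)
    return total / k

x = rng.standard_normal((100000, 2))  # isotropic, so E[f] = 1 either way
plain = f(x)
rb = orbit_average_f(x)
print("plain estimator  mean/var:", plain.mean(), plain.var())
print("orbit-averaged   mean/var:", rb.mean(), rb.var())
```

For evenly spaced rotations the orbit average collapses to `(x0**2 + x1**2) / 2`, which has half the variance of `x0**2` alone while keeping the same mean.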
arXiv Detail & Related papers (2025-02-14T03:26:57Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Reweighted Manifold Learning of Collective Variables from Enhanced Sampling Simulations [2.6009298669020477]
We provide a framework based on anisotropic diffusion maps for manifold learning.
We show that our framework reverts the biasing effect yielding CVs that correctly describe the equilibrium density.
We show that it can be used in many manifold learning techniques on data from both standard and enhanced sampling simulations.
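The reweighting step that reverts the biasing effect can be sketched in one dimension: samples drawn under a biased (broadened) density are assigned importance weights proportional to the ratio of equilibrium to biased densities, after which weighted statistics recover equilibrium values. The Gaussian bias and the specific weight formula below are illustrative assumptions standing in for the `exp(V/kT)` reweighting used with a known bias potential:

```python
import numpy as np

rng = np.random.default_rng(3)

# Equilibrium density: standard normal. Biased sampling: a wider normal,
# as if an enhanced-sampling bias V(x) had flattened the landscape.
sigma_b = 2.0
x = rng.normal(0.0, sigma_b, 50_000)

# Log importance weights proportional to p_eq(x) / p_biased(x); with a
# known bias potential this would be +V(x)/kT up to a constant.
logw = -0.5 * x**2 + 0.5 * (x / sigma_b) ** 2
w = np.exp(logw - logw.max())  # subtract max for numerical stability

# The raw biased estimate of the variance stays near sigma_b**2, while
# the self-normalized reweighted estimate recovers the equilibrium
# value of 1.0.
raw_var = np.mean(x**2)
rw_var = np.sum(w * x**2) / np.sum(w)
print("biased variance:    ", raw_var)
print("reweighted variance:", rw_var)
```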
arXiv Detail & Related papers (2022-07-29T08:59:56Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
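The amplitude-based empirical loss mentioned in this summary can be minimized with plain gradient descent on a small synthetic instance. The spectral initialization and step size below are standard choices assumed for illustration, not the algorithm analyzed in the paper; recovery is only possible up to a global sign:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 20, 200  # unknown dimension and number of phaseless measurements
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)  # amplitude-only (phaseless) measurements

def amp_loss(x):
    # Amplitude-based empirical loss: mean squared amplitude mismatch.
    return np.mean((np.abs(A @ x) - y) ** 2)

# Spectral initialization: leading eigenvector of the y^2-weighted
# covariance, scaled by the norm estimate sqrt(mean(y^2)).
S = (A.T * y**2) @ A / m
_, V = np.linalg.eigh(S)
x = V[:, -1] * np.sqrt(np.mean(y**2))
loss0 = amp_loss(x)

# Gradient descent on the amplitude-based loss; the gradient of
# mean((|Ax| - y)^2) is (2/m) A^T (Ax - y * sign(Ax)).
for _ in range(500):
    z = A @ x
    x -= 0.2 * (2.0 / m) * (A.T @ (z - y * np.sign(z)))

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("initial loss:", loss0, "final loss:", amp_loss(x))
print("relative error (up to sign):", err / np.linalg.norm(x_true))
```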
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
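The two terms of this bound can be evaluated numerically to see which regime dominates; the example parameter values below are illustrative, and logarithmic factors are omitted as in the statement above:

```python
def q_learning_sample_bound(mu_min, gamma, eps, t_mix):
    """Leading-order terms of the asynchronous Q-learning bound
    (logarithmic factors omitted):

        1 / (mu_min * (1 - gamma)^5 * eps^2)  +  t_mix / (mu_min * (1 - gamma))

    mu_min: minimum state-action occupancy of the behavior policy,
    gamma:  discount factor, eps: target entrywise accuracy,
    t_mix:  mixing time of the sample trajectory.
    """
    term_accuracy = 1.0 / (mu_min * (1.0 - gamma) ** 5 * eps ** 2)
    term_mixing = t_mix / (mu_min * (1.0 - gamma))
    return term_accuracy + term_mixing

# Example: ~100 state-action pairs visited near-uniformly, discount 0.9.
bound = q_learning_sample_bound(mu_min=0.01, gamma=0.9, eps=0.1, t_mix=10)
print(f"sample bound: {bound:.3e}")  # accuracy term dominates here
```

For these values the first (accuracy) term is about $10^9$ while the mixing term is only $10^4$, which is why the $(1-\gamma)^{-5}$ dependence drives the discussion of sharper analyses.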
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.