Behind the Noise: Conformal Quantile Regression Reveals Emergent Representations
- URL: http://arxiv.org/abs/2505.08176v1
- Date: Tue, 13 May 2025 02:27:12 GMT
- Title: Behind the Noise: Conformal Quantile Regression Reveals Emergent Representations
- Authors: Petrus H. Zwart, Tamas Varga, Odeta Qafoku, James A. Sethian,
- Abstract summary: We present a machine learning approach that denoises low-quality measurements with calibrated uncertainty bounds. By using ensembles of lightweight, randomly structured neural networks trained via conformal quantile regression, our method performs reliable denoising. We validate the approach on real-world geobiochemical imaging data, showing how it supports confident interpretation and guides experimental design.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Scientific imaging often involves long acquisition times to obtain high-quality data, especially when probing complex, heterogeneous systems. However, reducing acquisition time to increase throughput inevitably introduces significant noise into the measurements. We present a machine learning approach that not only denoises low-quality measurements with calibrated uncertainty bounds, but also reveals emergent structure in the latent space. By using ensembles of lightweight, randomly structured neural networks trained via conformal quantile regression, our method performs reliable denoising while uncovering interpretable spatial and chemical features -- without requiring labels or segmentation. Unlike conventional approaches focused solely on image restoration, our framework leverages the denoising process itself to drive the emergence of meaningful representations. We validate the approach on real-world geobiochemical imaging data, showing how it supports confident interpretation and guides experimental design under resource constraints.
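The core mechanism named in the abstract, conformalized quantile regression (CQR), can be sketched on toy 1-D data: fit lower and upper quantile models with the pinball loss, then widen the band by a conformal correction computed on held-out calibration data. This is an illustrative sketch only; a plain polynomial quantile model stands in for the paper's ensembles of randomly structured networks, and all names and the synthetic data are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data: the "measurement" gets noisier as x grows.
x = rng.uniform(0.0, 1.0, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05 + 0.3 * x)
x_tr, y_tr, x_cal, y_cal = x[:300], y[:300], x[300:], y[300:]

def fit_quantile(x, y, tau, lr=0.5, steps=3000):
    """Fit a cubic-polynomial quantile model by subgradient descent on the pinball loss."""
    X = np.vander(x, 4)  # features: x^3, x^2, x, 1
    w = np.zeros(4)
    for _ in range(steps):
        r = y - X @ w
        # Pinball-loss subgradient: -tau on under-predictions, (1 - tau) on over-predictions.
        w -= lr * X.T @ np.where(r > 0, -tau, 1.0 - tau) / len(y)
    return lambda xq: np.vander(xq, 4) @ w

lo, hi = fit_quantile(x_tr, y_tr, 0.05), fit_quantile(x_tr, y_tr, 0.95)

# Split-conformal calibration (CQR): how far do held-out points fall outside [lo, hi]?
scores = np.maximum(lo(x_cal) - y_cal, y_cal - hi(x_cal))
alpha = 0.1
level = np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores)
q = np.quantile(scores, level)

# Widening both quantile estimates by q yields intervals with >= (1 - alpha) coverage.
covered = (y_cal >= lo(x_cal) - q) & (y_cal <= hi(x_cal) + q)
print(f"empirical coverage: {covered.mean():.2f}")
```

Even when the underlying quantile fits are rough, the conformal step adjusts the band width so the stated coverage level holds, which is what makes the uncertainty bounds "calibrated" in the sense the abstract uses.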
Related papers
- Diffusion-Based Limited-Angle CT Reconstruction under Noisy Conditions [10.287171164361608]
Missing angular projections lead to incomplete sinograms and artifacts in reconstructed images. We propose a diffusion-based framework that completes missing angular views using a Mean-Reverting Stochastic Differential Equation (MR-SDE) formulation. To improve robustness under realistic noise, we propose a novel noise-aware mechanism that explicitly models inference-time uncertainty.
arXiv Detail & Related papers (2025-07-08T03:58:52Z)
- Noisier2Inverse: Self-Supervised Learning for Image Reconstruction with Correlated Noise [1.099532646524593]
Noisier2Inverse is a correction-free self-supervised deep learning approach for general inverse problems. We numerically demonstrate that our method clearly outperforms previous self-supervised approaches that account for correlated noise.
arXiv Detail & Related papers (2025-03-25T08:59:11Z)
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models. In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss. We present crucial strategies such as a channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Hierarchical Disentangled Representation for Invertible Image Denoising and Beyond [14.432771193620702]
Inspired by a latent observation that noise tends to appear in the high-frequency part of the image, we propose a fully invertible denoising method.
We decompose the noisy image into clean low-frequency and hybrid high-frequency parts with an invertible transformation.
In this way, denoising is made tractable by inversely merging noiseless low and high-frequency parts.
arXiv Detail & Related papers (2023-01-31T01:24:34Z)
- Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design taking such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z)
- Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
arXiv Detail & Related papers (2022-10-25T08:34:29Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
arXiv Detail & Related papers (2020-11-17T04:12:52Z)
- Improving Blind Spot Denoising for Microscopy [73.94017852757413]
We present a novel way to improve the quality of self-supervised denoising.
We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network.
arXiv Detail & Related papers (2020-08-19T13:06:24Z)
- Reconstructing the Noise Manifold for Image Denoising [56.562855317536396]
We introduce the idea of a cGAN which explicitly leverages structure in the image noise space.
By directly learning a low-dimensional manifold of the image noise, the generator promotes the removal from the noisy image of only that information which lies on this manifold.
Based on our experiments, our model substantially outperforms existing state-of-the-art architectures.
arXiv Detail & Related papers (2020-02-11T00:31:31Z)
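The noise-manifold idea in the last entry, remove only the component of a noisy signal that lies in a learned low-dimensional noise subspace, can be illustrated with a deliberately linear stand-in: PCA on noise-only samples in place of the paper's cGAN generator. Everything here (the synthetic setup, dimensions, and variable names) is an assumption for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic structured noise confined to a k-dimensional subspace of R^d.
d, k, n = 64, 3, 500
basis = rng.normal(size=(d, k))                   # unknown "noise manifold" directions
noise_samples = rng.normal(size=(n, k)) @ basis.T

# Learn the noise subspace from noise-only samples (PCA via SVD).
centered = noise_samples - noise_samples.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
noise_dirs = vt[:k]                               # orthonormal rows spanning the subspace

# "Denoise" a signal by removing only its component inside that subspace,
# leaving everything orthogonal to the noise manifold untouched.
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, d))
noisy = clean + rng.normal(size=k) @ basis.T
denoised = noisy - noise_dirs.T @ (noise_dirs @ noisy)

print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

In the linear case the projection removes the noise exactly while sacrificing only the (typically small) part of the clean signal that happens to lie in the noise subspace; the cGAN version generalizes this to a learned nonlinear noise manifold.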
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.