Self-Supervised Learning with Noisy Dataset for Rydberg Microwave Sensors Denoising
- URL: http://arxiv.org/abs/2601.01924v1
- Date: Mon, 05 Jan 2026 09:16:09 GMT
- Title: Self-Supervised Learning with Noisy Dataset for Rydberg Microwave Sensors Denoising
- Authors: Zongkai Liu, Qiming Ren, Wenguang Yang, Yanjie Tong, Huizhen Wang, Yijie Zhang, Ruohao Zhi, Junyao Xie, Mingyong Jing, Hao Zhang, Liantuan Xiao, Suotang Jia, Ke Tang, Linjie Zhang
- Abstract summary: We report a self-supervised deep learning framework for Rydberg sensors that enables single-shot noise suppression. The framework eliminates the need for clean reference signals by training on two sets of noisy signals with identical statistical distributions. When evaluated on Rydberg sensing datasets, the framework outperforms wavelet transform and Kalman filtering.
- Score: 12.897936741782011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We report a self-supervised deep learning framework for Rydberg sensors that enables single-shot noise suppression matching the accuracy of multi-measurement averaging. The framework eliminates the need for clean reference signals (which are hard to acquire in quantum sensing) by training on two sets of noisy signals with identical statistical distributions. When evaluated on Rydberg sensing datasets, the framework outperforms wavelet transform and Kalman filtering, achieving a denoising effect equivalent to 10,000-set averaging while reducing computation time by three orders of magnitude. We further validate performance across diverse noise profiles and quantify the complexity-performance trade-off of U-Net and Transformer architectures, providing actionable guidance for optimizing deep learning-based denoising in Rydberg sensor systems.
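The two-noisy-copies training scheme is in the spirit of Noise2Noise: when the noise is zero-mean and independent between the two acquisitions, regressing one noisy measurement onto the other is, in expectation, equivalent to regressing onto the clean signal. The minimal NumPy sketch below illustrates this principle on synthetic sinusoidal traces, with a simple learned linear filter standing in for the paper's U-Net/Transformer; the data generator, filter form, and parameters are illustrative assumptions, not the authors' setup or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_pair(batch=64, length=128, sigma=0.5):
    """One underlying trace, two statistically identical but
    independent Gaussian noise draws -- no clean reference is used."""
    t = np.linspace(0, 1, length)
    clean = np.sin(2 * np.pi * 5 * t)
    a = clean + sigma * rng.standard_normal((batch, length))
    b = clean + sigma * rng.standard_normal((batch, length))
    return a, b, clean

def shifted_features(x, half=4):
    """Stack circularly shifted copies of each trace, so a learned
    weight vector acts as a small sliding denoising filter."""
    return np.stack([np.roll(x, s, axis=1) for s in range(-half, half + 1)],
                    axis=2)

a, b, clean = make_noisy_pair()

# Fit the filter by regressing noisy copy A onto noisy copy B:
# since B's noise is independent of A, the least-squares solution
# approximates the filter that best predicts the clean signal.
X = shifted_features(a)                                  # (batch, length, taps)
w, *_ = np.linalg.lstsq(X.reshape(-1, X.shape[2]), b.ravel(), rcond=None)

denoised = shifted_features(a) @ w
mse_raw = np.mean((a - clean) ** 2)                      # error of raw traces
mse_out = np.mean((denoised - clean) ** 2)               # error after denoising
```

Even though the target `b` is itself noisy, `mse_out` comes out well below `mse_raw`: the independent target noise averages out in the least-squares fit, which is the same property that lets the paper's network train without clean references.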
Related papers
- Domain-Incremental Continual Learning for Robust and Efficient Keyword Spotting in Resource Constrained Systems [0.0]
Keyword Spotting systems with small-footprint models deployed on edge devices face significant accuracy and robustness challenges. We propose a comprehensive continual learning framework designed to adapt to new domains while maintaining computational efficiency. The proposed pipeline integrates a dual-input Convolutional Neural Network, utilizing both Mel Frequency Cepstral Coefficients (MFCC) and Mel-spectrogram features.
arXiv Detail & Related papers (2026-01-22T17:59:31Z) - Denoising by neural network for muzzle blast detection [0.23999111269325263]
Acoem develops gunshot detection systems, consisting of a microphone array and software that detects and locates shooters on the battlefield. The performance of such systems is inevitably affected by the acoustic environment in which they operate. To limit the influence of the acoustic environment, a neural network has been developed.
arXiv Detail & Related papers (2025-08-18T09:05:45Z) - Machine Unlearning for Robust DNNs: Attribution-Guided Partitioning and Neuron Pruning in Noisy Environments [5.8166742412657895]
Deep neural networks (DNNs) have achieved remarkable success across diverse domains, but their performance can be severely degraded by noisy or corrupted training data. We propose a novel framework that integrates attribution-guided data partitioning, discriminative neuron pruning, and targeted fine-tuning to mitigate the impact of noisy samples. Our framework achieves approximately a 10% absolute accuracy improvement over standard retraining on CIFAR-10 with injected label noise.
arXiv Detail & Related papers (2025-06-13T09:37:11Z) - A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy. We present around 55 distinct features extracted from industrial images, which are then analyzed using statistical methods. By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv Detail & Related papers (2024-12-11T22:12:21Z) - Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on Mixed-Signal Accelerators [4.416800723562206]
We model process-induced and aging-related variations of analog computing components on the accuracy of the analog neural networks.
We introduce a denoising block inserted between selected layers of a pre-trained model.
We demonstrate that training the denoising block significantly increases the model's robustness against various noise levels.
arXiv Detail & Related papers (2024-09-27T08:45:55Z) - Realistic Noise Synthesis with Diffusion Models [44.404059914652194]
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv Detail & Related papers (2023-05-23T12:56:01Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
Four SOTA robust loss functions are integrated with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z) - S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR-10/CIFAR-100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z) - Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders [62.997667081978825]
We propose a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input.
The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals.
arXiv Detail & Related papers (2021-09-18T14:51:24Z) - The potential of self-supervised networks for random noise suppression in seismic data [0.0]
Blind-spot networks are shown to be an efficient suppressor of random noise in seismic data.
Results are compared with two commonly used random denoising techniques: FX-deconvolution and Curvelet transform.
We believe this is just the beginning of utilising self-supervised learning in seismic applications.
arXiv Detail & Related papers (2021-09-15T14:57:43Z) - Denoising Distantly Supervised Named Entity Recognition via a Hypergeometric Probabilistic Model [26.76830553508229]
Hypergeometric Learning (HGL) is a denoising algorithm for distantly supervised named entity recognition.
HGL takes both noise distribution and instance-level confidence into consideration.
Experiments show that HGL can effectively denoise the weakly-labeled data retrieved from distant supervision.
arXiv Detail & Related papers (2021-06-17T04:01:25Z) - Training Classifiers that are Universally Robust to All Label Noise Levels [91.13870793906968]
Deep neural networks are prone to overfitting in the presence of label noise.
We propose a distillation-based framework that incorporates a new subcategory of Positive-Unlabeled learning.
Our framework generally outperforms at medium to high noise levels.
arXiv Detail & Related papers (2021-05-27T13:49:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.