Harnessing Data Augmentation to Quantify Uncertainty in the Early
Estimation of Single-Photon Source Quality
- URL: http://arxiv.org/abs/2306.15683v2
- Date: Tue, 9 Jan 2024 09:24:49 GMT
- Authors: David Jacob Kedziora, Anna Musiał, Wojciech Rudno-Rudziński, and Bogdan Gabrys
- Abstract summary: This study investigates the use of data augmentation, a machine learning technique, to supplement experimental data with bootstrapped samples.
Eight datasets obtained from measurements involving a single InGaAs/GaAs epitaxial quantum dot serve as a proof-of-principle example.
- Score: 8.397730500554047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel methods for rapidly estimating single-photon source (SPS) quality have
been promoted in recent literature to address the expensive and time-consuming
nature of experimental validation via intensity interferometry. However, the
frequent lack of uncertainty discussions and reproducible details raises
concerns about their reliability. This study investigates the use of data
augmentation, a machine learning technique, to supplement experimental data
with bootstrapped samples and quantify the uncertainty of such estimates. Eight
datasets obtained from measurements involving a single InGaAs/GaAs epitaxial
quantum dot serve as a proof-of-principle example. Analysis of one of the SPS
quality metrics derived from efficient histogram fitting of the synthetic
samples, i.e. the probability of multi-photon emission events, reveals
significant uncertainty contributed by stochastic variability in the Poisson
processes that describe detection rates. Ignoring this source of error risks
severe overconfidence in both early quality estimates and claims for
state-of-the-art SPS devices. Additionally, this study finds that standard
least-squares fitting is comparable to using a Poisson likelihood, and
expanding averages show some promise for early estimation. Also, reducing
background counts improves fitting accuracy but does not address the
Poisson-process variability. Ultimately, data augmentation demonstrates its
value in supplementing physical experiments; its benefit here is to emphasise
the need for a cautious assessment of SPS quality.
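The augmentation idea described in the abstract, resampling histogram bin counts from Poisson distributions so that detection-rate variability propagates into the multi-photon emission estimate, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual pipeline: the function name and the crude centre-peak-to-side-peak ratio estimator of g2(0) are hypothetical stand-ins for the efficient histogram fitting used in the paper.

```python
import numpy as np

def bootstrap_g2_uncertainty(counts, center_idx, n_boot=2000, seed=0):
    """Parametric bootstrap of a coincidence histogram.

    Each bin count is resampled from a Poisson distribution whose mean is
    the observed count, mimicking the stochastic variability of the
    detection process; g2(0) is then re-estimated on every synthetic sample.
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    side_idx = np.delete(np.arange(len(counts)), center_idx)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        synthetic = rng.poisson(counts)  # one bootstrapped histogram
        # Crude illustrative estimator: centre-peak counts over mean side-peak counts.
        estimates[b] = synthetic[center_idx] / synthetic[side_idx].mean()
    low, high = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (low, high)

# Toy histogram: six side peaks of ~1000 coincidences and a suppressed centre peak.
peaks = [1000, 1000, 1000, 50, 1000, 1000, 1000]
mean_g2, ci = bootstrap_g2_uncertainty(peaks, center_idx=3)
```

The spread of the bootstrapped estimates, not just their mean, is the quantity of interest: the width of the percentile interval exposes the Poisson-process uncertainty that a single point estimate hides.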
Related papers
- On Measuring Calibration of Discrete Probabilistic Neural Networks [3.120856767382004]
Training neural networks to fit high-dimensional probability distributions via maximum likelihood has become an effective method for uncertainty quantification.
Traditional metrics like Expected Calibration Error (ECE) and Negative Log Likelihood (NLL) have limitations.
This paper proposes a new approach using conditional kernel mean embeddings to measure calibration discrepancies without these biases and assumptions.
arXiv Detail & Related papers (2024-05-20T23:30:07Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs to many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- LMD: Light-weight Prediction Quality Estimation for Object Detection in Lidar Point Clouds [3.927702899922668]
Object detection on Lidar point cloud data is a promising technology for autonomous driving and robotics.
Uncertainty estimation is a crucial component for downstream tasks, and deep neural networks remain error-prone even for predictions with high confidence.
We propose LidarMetaDetect, a light-weight post-processing scheme for prediction quality estimation.
Our experiments show a significant increase of statistical reliability in separating true from false predictions.
arXiv Detail & Related papers (2023-06-13T15:13:29Z)
- Mutual Wasserstein Discrepancy Minimization for Sequential Recommendation [82.0801585843835]
We propose MStein, a novel self-supervised learning framework based on mutual Wasserstein discrepancy minimization, for sequential recommendation.
We also propose a novel contrastive learning loss based on Wasserstein Discrepancy Measurement.
arXiv Detail & Related papers (2023-01-28T13:38:48Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Nonparametric Empirical Bayes Estimation and Testing for Sparse and Heteroscedastic Signals [5.715675926089834]
Large-scale modern data often involves estimation and testing for high-dimensional unknown parameters.
It is desirable to identify the sparse signals, the "needles in the haystack", with accuracy and false discovery control.
We propose a novel Spike-and-Nonparametric mixture prior (SNP) -- a spike to promote sparsity and a nonparametric structure to capture signals.
arXiv Detail & Related papers (2021-06-16T15:55:44Z)
- When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in miscalibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z)
- Stochastic Approximation for High-frequency Observations in Data Assimilation [0.0]
High-frequency sensors offer opportunities for higher statistical accuracy of downstream estimates, but their frequency results in a plethora of computational problems in data assimilation tasks.
We adapt approximation methods to address the unique challenges of high-frequency observations in data assimilation.
As a result, we are able to produce estimates that leverage all of the observations in a manner that avoids the aforementioned computational problems and preserves the statistical accuracy of the estimates.
arXiv Detail & Related papers (2020-11-05T06:02:27Z)
- Uncertainty Quantification in Extreme Learning Machine: Analytical Developments, Variance Estimates and Confidence Intervals [0.0]
Uncertainty quantification is crucial to assess prediction quality of a machine learning model.
Most methods proposed in the literature make strong assumptions on the data, ignore the randomness of input weights or neglect the bias contribution in confidence interval estimations.
This paper presents novel estimations that overcome these constraints and improve the understanding of ELM variability.
arXiv Detail & Related papers (2020-11-03T13:45:59Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.