Asymptotically Optimal Adversarial Strategies for the Probability
Estimation Framework
- URL: http://arxiv.org/abs/2306.06802v1
- Date: Sun, 11 Jun 2023 22:58:01 GMT
- Title: Asymptotically Optimal Adversarial Strategies for the Probability
Estimation Framework
- Authors: Soumyadip Patra and Peter Bierhorst
- Abstract summary: We present a self-contained proof of the optimality of the PEF method for certifying randomness in quantum non-locality experiments.
We apply these results to the (2,2,2) Bell scenario, obtaining an analytic characterisation of the optimal adversarial attacks bound by no-signalling principles.
We also study extensions of the analysis to quantum-limited adversaries in the (2,2,2) Bell scenario and no-signalling adversaries in higher $(n,m,k)$ Bell scenarios.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Probability Estimation Framework involves direct estimation of the
probability of occurrences of outcomes conditioned on measurement settings and
side information. It is a powerful tool for certifying randomness in quantum
non-locality experiments. In this paper, we present a self-contained proof of
the asymptotic optimality of the method. Our approach refines earlier results
to allow a better characterisation of optimal adversarial attacks on the
protocol. We apply these results to the (2,2,2) Bell scenario, obtaining an
analytic characterisation of the optimal adversarial attacks bound by
no-signalling principles, while also demonstrating the asymptotic robustness of
the PEF method to deviations from expected experimental behaviour. We also
study extensions of the analysis to quantum-limited adversaries in the (2,2,2)
Bell scenario and no-signalling adversaries in higher $(n,m,k)$ Bell scenarios.
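As an illustrative aside (not part of the paper's PEF construction), the no-signalling constraints that bound the adversary in the (2,2,2) Bell scenario can be checked directly on a behaviour P(a,b|x,y): each party's marginal must be independent of the other party's setting. The sketch below verifies this for the Popescu-Rohrlich box, the standard extremal no-signalling example; the function names are illustrative.

```python
# Hedged sketch: checking the no-signalling conditions for a (2,2,2)
# Bell-scenario behaviour P(a,b|x,y). Illustrative only.
import itertools

def pr_box(a, b, x, y):
    """Popescu-Rohrlich box: P(a,b|x,y) = 1/2 if a XOR b == x*y, else 0."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def is_no_signalling(p, tol=1e-12):
    """Check each party's marginal is independent of the other's setting."""
    # Alice's marginal P(a|x) must not depend on Bob's setting y.
    for x, a in itertools.product(range(2), repeat=2):
        marg = {y: sum(p(a, b, x, y) for b in range(2)) for y in range(2)}
        if abs(marg[0] - marg[1]) > tol:
            return False
    # Bob's marginal P(b|y) must not depend on Alice's setting x.
    for y, b in itertools.product(range(2), repeat=2):
        marg = {x: sum(p(a, b, x, y) for a in range(2)) for x in range(2)}
        if abs(marg[0] - marg[1]) > tol:
            return False
    return True

print(is_no_signalling(pr_box))  # True: the PR box is no-signalling
```

The optimal no-signalling attacks characterised in the paper are convex decompositions over exactly such behaviours; this check only tests membership in the no-signalling polytope.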
Related papers
- Tighter Performance Theory of FedExProx [85.92481138826949]
We revisit FedExProx - a recently proposed distributed optimization method designed to enhance convergence properties of parallel algorithms via extrapolation.
We develop a novel analysis framework, establishing a tighter linear convergence rate for non-strongly convex quadratic problems.
We extend the applicability of our analysis to general functions satisfying the Polyak-Lojasiewicz condition, outperforming the previous strongly convex analysis.
arXiv Detail & Related papers (2024-10-20T11:53:25Z)
- Optimizing Probabilistic Conformal Prediction with Vectorized Non-Conformity Scores [6.059745771017814]
We propose a novel framework that enhances efficiency by first vectorizing the non-conformity scores with ranked samples and then optimizing the shape of the prediction set by varying the quantiles for samples at the same rank.
Our method delivers valid coverage while producing discontinuous and more efficient prediction sets, making it particularly suited for high-stakes applications.
arXiv Detail & Related papers (2024-10-17T16:37:03Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Transductive Active Learning: Theory and Applications [35.49225932333298]
We study a generalization of classical active learning to real-world settings with concrete prediction targets.
We analyze a family of decision rules that sample adaptively to minimize uncertainty about prediction targets.
arXiv Detail & Related papers (2024-02-13T09:22:45Z)
- Variational Inference with Coverage Guarantees in Simulation-Based Inference [18.818573945984873]
We propose Conformalized Amortized Neural Variational Inference (CANVI).
CANVI constructs conformalized predictors based on each candidate, compares the predictors using a metric known as predictive efficiency, and returns the most efficient predictor.
We prove lower bounds on the predictive efficiency of the regions produced by CANVI and explore how the quality of a posterior approximation relates to the predictive efficiency of prediction regions based on that approximation.
arXiv Detail & Related papers (2023-05-23T17:24:04Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$ samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts [14.656957226255628]
Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution close to the target distribution.
We use the Bernstein online aggregation algorithm on the proposed family of robust experts to generate predictions for the sequential stream of target samples.
arXiv Detail & Related papers (2021-06-01T08:51:55Z)
- Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency [83.02999769628593]
We offer a theoretical characterization of off-policy evaluation (OPE) in reinforcement learning.
We show that the minimax approach enables us to achieve a fast rate of convergence for weights and quality functions.
We present the first finite-sample result with first-order efficiency in non-tabular environments.
arXiv Detail & Related papers (2021-02-05T03:20:39Z)