An operator-algebraic formulation of self-testing
- URL: http://arxiv.org/abs/2301.11291v2
- Date: Wed, 25 Oct 2023 19:15:33 GMT
- Title: An operator-algebraic formulation of self-testing
- Authors: Connor Paddock, William Slofstra, Yuming Zhao, and Yangchen Zhou
- Abstract summary: We give a new definition of self-testing for correlations in terms of states on $C^*$-algebras.
For extremal binary correlations and for extremal synchronous correlations, we show that any self-test for projective models is a self-test for POVM models.
An advantage of our new definition is that it extends naturally to commuting operator models.
- Score: 2.115993069505241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give a new definition of self-testing for correlations in terms of states
on $C^*$-algebras. We show that this definition is equivalent to the standard
definition for any class of finite-dimensional quantum models which is closed,
provided that the correlation is extremal and has a full-rank model in the
class. This last condition automatically holds for the class of POVM quantum
models, but does not necessarily hold for the class of projective models by a
result of Baptista, Chen, Kaniewski, Lolck, Mančinska, Gabelgaard
Nielsen, and Schmidt. For extremal binary correlations and for extremal
synchronous correlations, we show that any self-test for projective models is a
self-test for POVM models. The question of whether there is a self-test for
projective models which is not a self-test for POVM models remains open.
An advantage of our new definition is that it extends naturally to commuting
operator models. We show that an extremal correlation is a self-test for
finite-dimensional quantum models if and only if it is a self-test for
finite-dimensional commuting operator models, and also observe that many known
finite-dimensional self-tests are in fact self-tests for infinite-dimensional
commuting operator models.
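To fix notation for readers, the following is a minimal sketch of the standard (tensor-product) definition of self-testing that the abstract refers to; the ideal model, local isometries, and auxiliary state below are the usual ingredients from the self-testing literature, not notation taken from this paper, whose new definition instead phrases self-testing via the states on a $C^*$-algebra that reproduce the correlation.

```latex
% Sketch of the standard definition: a correlation p self-tests an ideal model
% (|\tilde\psi>, {\tilde M^a_x}, {\tilde N^b_y}) if every tensor-product model
% realizing p is equivalent to it up to local isometries and an auxiliary state.
\begin{gather*}
  p(a,b \mid x,y) \;=\; \langle \psi |\, M^a_x \otimes N^b_y \,| \psi \rangle
  \quad \text{for all } a,b,x,y
  \\[2pt]
  \Downarrow
  \\[2pt]
  \exists \text{ local isometries } V_A, V_B \text{ and a state } |\mathrm{aux}\rangle
  \text{ such that}
  \\
  (V_A \otimes V_B)\,(M^a_x \otimes N^b_y)\,|\psi\rangle
  \;=\; \bigl(\tilde M^a_x \otimes \tilde N^b_y\bigr)|\tilde\psi\rangle \otimes |\mathrm{aux}\rangle
  \quad \text{for all } a,b,x,y.
\end{gather*}
```

Here extremality means the correlation is an extreme point of the set of quantum correlations, i.e., it cannot be written as a nontrivial convex combination of other quantum correlations.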
Related papers
- Self-Consistency of Large Language Models under Ambiguity [4.141513298907867]
This work presents an evaluation benchmark for self-consistency in cases of under-specification.
We conduct a series of behavioral experiments on the OpenAI model suite using an ambiguous integer sequence completion task.
We find that average consistency ranges from 67% to 82%, far higher than would be predicted if a model's consistency was random.
arXiv Detail & Related papers (2023-10-20T11:57:56Z) - Artificial neural networks and time series of counts: A class of nonlinear INGARCH models [0.0]
It is shown how INGARCH models can be combined with artificial neural network (ANN) response functions to obtain a class of nonlinear INGARCH models.
The ANN framework allows for the interpretation of many existing INGARCH models as a degenerate version of a corresponding neural model.
The empirical analysis of time series of bounded and unbounded counts reveals that the neural INGARCH models are able to outperform reasonable degenerate competitor models in terms of information loss.
arXiv Detail & Related papers (2023-04-03T14:26:16Z) - Learning to Increase the Power of Conditional Randomization Tests [8.883733362171032]
The model-X conditional randomization test is a generic framework for conditional independence testing.
We introduce novel model-fitting schemes that are designed to explicitly improve the power of model-X tests. (A generic sketch of the basic model-X CRT procedure appears after this list.)
arXiv Detail & Related papers (2022-07-03T12:29:25Z) - Predicting Out-of-Distribution Error with the Projection Norm [87.61489137914693]
Projection Norm predicts a model's performance on out-of-distribution data without access to ground truth labels.
We find that Projection Norm is the only approach that achieves non-trivial detection performance on adversarial examples.
arXiv Detail & Related papers (2022-02-11T18:58:21Z) - Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z) - Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z) - Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - A Kernel Stein Test for Comparing Latent Variable Models [48.32146056855925]
We propose a kernel-based nonparametric test of relative goodness of fit, where the goal is to compare two models, both of which may have unobserved latent variables.
We show that our test significantly outperforms the relative Maximum Mean Discrepancy test, which is based on samples from the models and does not exploit the latent structure.
arXiv Detail & Related papers (2019-07-01T07:46:16Z)
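As referenced in the conditional-randomization-test entry above, the following is a small, hedged sketch of the basic model-X CRT loop that such methods build on. The test statistic, the sampler for X given Z, and the toy data are illustrative assumptions; this is not the model-fitting scheme proposed in that paper.

```python
import numpy as np


def model_x_crt(x, y, z, sample_x_given_z, statistic, n_resamples=500, rng=None):
    """Generic model-X conditional randomization test for H0: Y independent of X given Z.

    sample_x_given_z(z, rng) draws a fresh copy of X from its (assumed known)
    conditional distribution given Z -- the defining assumption of the model-X framework.
    statistic(x, y, z) is any scalar test statistic; larger values indicate dependence.
    Returns a finite-sample-valid p-value.
    """
    rng = np.random.default_rng(rng)
    t_obs = statistic(x, y, z)
    # Resample X | Z and recompute the statistic; under H0 the observed statistic
    # is exchangeable with the resampled ones, so the rank-based p-value is valid.
    t_null = np.array([statistic(sample_x_given_z(z, rng), y, z)
                       for _ in range(n_resamples)])
    return (1 + np.sum(t_null >= t_obs)) / (n_resamples + 1)


if __name__ == "__main__":
    # Toy illustration: X depends on Y only through Z, so H0 holds and the
    # p-value should be roughly uniform over repeated runs.
    rng = np.random.default_rng(0)
    z = rng.normal(size=300)
    x = z + rng.normal(size=300)   # X | Z ~ N(Z, 1), assumed known to the tester
    y = z + rng.normal(size=300)   # Y depends on Z but not on X directly
    sample = lambda z, rng: z + rng.normal(size=z.shape)
    stat = lambda x, y, z: abs(np.corrcoef(x - z, y - z)[0, 1])  # hypothetical statistic
    print(model_x_crt(x, y, z, sample, stat, n_resamples=200, rng=1))
```

The paper summarized above is about learning better statistics and model-fitting schemes to raise the power of this loop; the loop itself is the standard framework sketched here.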