Machine learning meets the CHSH scenario
- URL: http://arxiv.org/abs/2407.14396v1
- Date: Fri, 19 Jul 2024 15:16:31 GMT
- Title: Machine learning meets the CHSH scenario
- Authors: Gabriel Pereira Alves, Nicolas Gigena, Jędrzej Kaniewski
- Abstract summary: We focus on assessing the usefulness and effectiveness of the machine learning (ML) approach.
We consider a wide selection of approaches, ranging from simple data science models to dense neural networks.
We conclude that while it is relatively easy to achieve good performance on average, it is hard to train a model that performs well on the "hard" cases.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we perform a comprehensive study of the machine learning (ML) methods for the purpose of characterising the quantum set of correlations. As our main focus is on assessing the usefulness and effectiveness of the ML approach, we focus exclusively on the CHSH scenario, both the 4-dimensional variant, for which an analytical solution is known, and the 8-dimensional variant, for which no analytical solution is known, but numerical approaches are relatively well understood. We consider a wide selection of approaches, ranging from simple data science models to dense neural networks. The two classes of models that perform well are support vector machines and dense neural networks, and they are the main focus of this work. We conclude that while it is relatively easy to achieve good performance on average, it is hard to train a model that performs well on the "hard" cases, i.e., points in the vicinity of the boundary of the quantum set. Sadly, these are precisely the cases which are interesting from the academic point of view. In order to improve performance on hard cases one must, especially for the 8-dimensional problem, resort to a tailored choice of training data, which means that we are implicitly feeding our intuition and biases into the model. We feel that this is an important and often overlooked aspect of applying ML models to academic problems, where data generation or data selection is performed according to some implicit subjective criteria. In this way, it is possible to unconsciously steer our model, so that it exhibits features that we are interested in seeing. Hence, special care must be taken while determining whether ML methods can be considered objective and unbiased in the context of academic problems.
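To make the membership problem concrete, the sketch below (our illustration, not code from the paper) works in the 4-dimensional CHSH variant mentioned in the abstract, where a point is a vector of four correlators (E00, E01, E10, E11). It computes the CHSH value and labels points as quantum-realisable or not using the known analytical characterisation of the quantum set (the arcsin criterion of Tsirelson, Landau and Masanes); such a labelling function is the kind of oracle one would use to generate training data for the SVM and neural-network models the paper studies. Function names are ours.

```python
import numpy as np

def chsh_value(E):
    """CHSH expression S = E00 + E01 + E10 - E11 for a point
    (E00, E01, E10, E11) in the 4-dimensional correlator space.
    Local models obey |S| <= 2; quantum models obey |S| <= 2*sqrt(2)."""
    return E[0] + E[1] + E[2] - E[3]

def is_quantum(E, tol=1e-9):
    """Analytical membership test for the quantum set in the 4D CHSH
    scenario (Tsirelson/Landau/Masanes): the point is quantum-realisable
    iff |sum_k arcsin(E_k) - 2*arcsin(E_j)| <= pi for every j."""
    a = np.arcsin(np.clip(np.asarray(E, dtype=float), -1.0, 1.0))
    s = a.sum()
    return all(abs(s - 2 * a[j]) <= np.pi + tol for j in range(4))

# The maximally violating quantum point sits on the boundary of the
# quantum set; the post-quantum PR box lies outside it.
tsirelson = np.array([1, 1, 1, -1]) / np.sqrt(2)
pr_box = np.array([1, 1, 1, -1])
print(chsh_value(tsirelson))  # 2*sqrt(2), Tsirelson's bound
print(is_quantum(tsirelson))  # True  (boundary point)
print(is_quantum(pr_box))     # False (outside the quantum set)
```

Points near the boundary, like the Tsirelson point above, are exactly the "hard" cases the abstract describes: uniform sampling of the correlator cube produces few of them, which is why a tailored choice of training data becomes necessary.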
Related papers
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Quantifying Human Bias and Knowledge to guide ML models during Training [0.0]
We introduce an experimental approach to dealing with skewed datasets by including humans in the training process.
We ask humans to rank the importance of features of the dataset, and through rank aggregation, determine the initial weight bias for the model.
We show that collective human bias can allow ML models to learn insights about the true population instead of the biased sample.
arXiv Detail & Related papers (2022-11-19T20:49:07Z)
- Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, out of which some prediction tasks yield identifiability, while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Towards Explainable Exploratory Landscape Analysis: Extreme Feature Selection for Classifying BBOB Functions [4.932130498861987]
We show that a surprisingly small number of features -- often less than four -- can suffice to achieve a 98% accuracy.
We show that the classification accuracy transfers to settings in which several instances are involved in training and testing.
arXiv Detail & Related papers (2021-02-01T10:04:28Z)
- On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation [0.0]
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization.
Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case.
arXiv Detail & Related papers (2020-12-14T13:00:50Z)
- Insights into Performance Fitness and Error Metrics for Machine Learning [1.827510863075184]
Machine learning (ML) is the field of training machines to achieve a high level of cognition and perform human-like analysis.
This paper examines a number of the most commonly-used performance fitness and error metrics for regression and classification algorithms.
arXiv Detail & Related papers (2020-05-17T22:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.