Approximation algorithms for noncommutative constraint satisfaction
problems
- URL: http://arxiv.org/abs/2312.16765v1
- Date: Thu, 28 Dec 2023 01:22:27 GMT
- Title: Approximation algorithms for noncommutative constraint satisfaction
problems
- Authors: Eric Culf, Hamoon Mousavi, and Taro Spirig
- Abstract summary: We study operator - or noncommutative - variants of constraint satisfaction problems (CSPs)
We introduce a framework for designing approximation algorithms for noncommutative CSPs.
This work is the first to establish approximation ratios for a broader class of noncommutative CSPs.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We study operator - or noncommutative - variants of constraint satisfaction
problems (CSPs). These higher-dimensional variants are a core topic of
investigation in quantum information, where they arise as nonlocal games and
entangled multiprover interactive proof systems (MIP*). The idea of
higher-dimensional relaxations of CSPs is also important in the classical
literature. For example, since the celebrated work of Goemans and Williamson on
Max-Cut, higher-dimensional vector relaxations have been central in the design
of approximation algorithms for classical CSPs.
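The Goemans-Williamson approach mentioned above can be illustrated with a minimal sketch: given unit vectors (one per vertex) from an SDP relaxation of Max-Cut, a random hyperplane through the origin rounds them to a ±1 cut. The instance below (three vectors at 120° angles, the optimal SDP solution for a triangle) is a toy illustration, not code from the paper.

```python
import numpy as np

def hyperplane_round(vectors, seed=None):
    """Round SDP unit vectors to a +/-1 cut via a random hyperplane."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=vectors.shape[1])  # normal of a random hyperplane
    return np.sign(vectors @ r)            # side of the hyperplane = side of the cut

# Toy instance: 3 unit vectors in the plane at 120-degree angles,
# the optimal SDP solution for the triangle graph K_3.
angles = 2 * np.pi * np.arange(3) / 3
vecs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
cut = hyperplane_round(vecs, seed=0)  # each entry is +1 or -1
```

Since the three vectors sum to zero, no hyperplane can place all of them on one side, so the rounded cut is always nontrivial for this instance.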
We introduce a framework for designing approximation algorithms for
noncommutative CSPs. Prior to this work, Max-$2$-Lin$(k)$ was the only family of
noncommutative CSPs known to be efficiently solvable. This work is the first to
establish approximation ratios for a broader class of noncommutative CSPs.
In the study of classical CSPs, $k$-ary decision variables are often
represented by $k$-th roots of unity, which generalise to the noncommutative
setting as order-$k$ unitary operators. In our framework, using representation
theory, we develop a way of constructing unitary solutions from SDP
relaxations, extending the pioneering work of Tsirelson on XOR games. Then, we
introduce a novel rounding scheme to transform these solutions to order-$k$
unitaries. Our main technical innovation here is a theorem guaranteeing that,
for any set of unitary operators, there exists a set of order-$k$ unitaries
that closely mimics it. As an integral part of the rounding scheme, we prove a
random matrix theory result that characterises the distribution of the relative
angles between eigenvalues of random unitaries using tools from free
probability.
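The notion of an order-$k$ unitary, and the idea of transforming a generic unitary into one, can be made concrete with a toy sketch. This is an assumption-laden illustration, not the paper's actual rounding scheme: it simply diagonalises a unitary and snaps each eigenvalue to the nearest $k$-th root of unity, so the result $V$ satisfies $V^k = I$.

```python
import numpy as np

def round_to_order_k(U, k):
    """Snap each eigenvalue of a unitary U to the nearest k-th root of unity.

    An order-k unitary V satisfies V^k = I, i.e. all its eigenvalues
    are k-th roots of unity (the noncommutative analogue of a k-ary
    decision variable represented by a k-th root of unity).
    """
    eigvals, eigvecs = np.linalg.eig(U)   # U = W diag(lambda) W^{-1}
    phases = np.angle(eigvals)            # eigenvalues of a unitary lie on the unit circle
    snapped = np.exp(2j * np.pi * np.round(phases * k / (2 * np.pi)) / k)
    return eigvecs @ np.diag(snapped) @ np.linalg.inv(eigvecs)

# Example: a Haar-ish random unitary from the QR decomposition of a
# complex Gaussian matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)
V = round_to_order_k(U, k=3)
err = np.linalg.norm(np.linalg.matrix_power(V, 3) - np.eye(4))  # small: V^3 = I
```

The paper's actual guarantee is much stronger and subtler: it shows that for any *set* of unitaries there is a set of order-$k$ unitaries that closely mimics it, which requires controlling all the operators jointly rather than snapping eigenvalues one matrix at a time.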
Related papers
- Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation [53.17668583030862]
We study infinite-horizon average-reward Markov decision processes (AMDPs) in the context of general function approximation.
We propose a novel algorithmic framework named Local-fitted Optimization with OPtimism (LOOP)
We show that LOOP achieves a sublinear $\tilde{\mathcal{O}}(\mathrm{poly}(d, \mathrm{sp}(V^*)) \sqrt{T\beta})$ regret, where $d$ and $\beta$ correspond to the AGEC and the log-covering number of the hypothesis class, respectively.
arXiv Detail & Related papers (2024-04-19T06:24:22Z)
- Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions [18.086061048484616]
We study finite-sum monotone inclusion problems, which model broad classes of equilibrium problems.
Our main contributions are variants of the classical Halpern iteration that employ variance reduction to obtain improved complexity guarantees.
We argue that, up to poly-logarithmic factors, this complexity is unimprovable in the monotone Lipschitz setting.
arXiv Detail & Related papers (2023-10-04T17:24:45Z)
- PDE-Based Optimal Strategy for Unconstrained Online Learning [40.61498562988079]
We present a framework that generates time-varying potential functions by solving a Partial Differential Equation (PDE)
Our framework recovers some classical potentials, and more importantly provides a systematic approach to design new ones.
This is the first parameter-free algorithm with optimal leading constant.
arXiv Detail & Related papers (2022-01-19T22:21:21Z)
- Finite-Time Complexity of Online Primal-Dual Natural Actor-Critic Algorithm for Constrained Markov Decision Processes [22.07834608976826]
We study an online primal-dual actor-critic method to solve a discounted cost constrained Markov decision process problem.
This paper is the first to study the finite-time complexity of an online primal-dual actor-critic method for solving a CMDP problem.
arXiv Detail & Related papers (2021-10-21T18:05:40Z)
- Anti-Concentrated Confidence Bonuses for Scalable Exploration [57.91943847134011]
Intrinsic rewards play a central role in handling the exploration-exploitation trade-off.
We introduce anti-concentrated confidence bounds for efficiently approximating the elliptical bonus.
We develop a practical variant for deep reinforcement learning that is competitive with contemporary intrinsic rewards on Atari benchmarks.
arXiv Detail & Related papers (2021-10-21T15:25:15Z)
- Permutation Compressors for Provably Faster Distributed Nonconvex Optimization [68.8204255655161]
We show that the MARINA method of Gorbunov et al. (2021) can be considered a state-of-the-art method in terms of theoretical communication complexity.
The theory of MARINA is extended to support potentially correlated compressors, taking the method beyond the classical independent-compressors setting.
arXiv Detail & Related papers (2021-10-07T09:38:15Z)
- Bayesian decision-making under misspecified priors with applications to meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z)
- Nonparametric approximation of conditional expectation operators [0.3655021726150368]
We investigate the approximation of the $L^2$-operator defined by $[Pf](x) := \mathbb{E}[f(Y) \mid X = x]$ under minimal assumptions.
We prove that $P$ can be arbitrarily well approximated in operator norm by Hilbert-Schmidt operators acting on a reproducing kernel space.
arXiv Detail & Related papers (2020-12-23T19:06:12Z)
- Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
arXiv Detail & Related papers (2020-10-12T17:41:44Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.