On the separation of correlation-assisted sum capacities of multiple
access channels
- URL: http://arxiv.org/abs/2205.13538v3
- Date: Thu, 3 Aug 2023 05:58:24 GMT
- Title: On the separation of correlation-assisted sum capacities of multiple
access channels
- Authors: Akshay Seshadri, Felix Leditzky, Vikesh Siddhu, Graeme Smith
- Abstract summary: We show that it is possible to compute the sum capacity of a family of MACs with multiple senders.
In the second part, we showcase algorithms for non-convex optimization of a class of functions we call Lipschitz-like functions.
In the third part, we show that one can use these techniques to compute the sum capacity of an arbitrary two-sender MAC to a fixed additive precision in quasi-polynomial time.
- Score: 13.572917264310119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The capacity of a channel characterizes the maximum rate at which information
can be transmitted through the channel asymptotically faithfully. For a channel
with multiple senders and a single receiver, computing its sum capacity is
possible in theory, but challenging in practice because of the nonconvex
optimization involved. To address this challenge, we investigate three topics
in our study. In the first part, we study the sum capacity of a family of
multiple access channels (MACs) obtained from nonlocal games. For any MAC in
this family, we obtain an upper bound on the sum rate that depends only on the
properties of the game when allowing assistance from an arbitrary set of
correlations between the senders. This approach can be used to prove
separations between sum capacities when the senders are allowed to share
different sets of correlations, such as classical, quantum or no-signalling
correlations. We also construct a specific nonlocal game to show that the
approach of bounding the sum capacity by relaxing the nonconvex optimization
can give arbitrarily loose bounds. Owing to this result, in the second part, we
study algorithms for non-convex optimization of a class of functions we call
Lipschitz-like functions. This class includes entropic quantities, and hence
these results may be of independent interest in information theory.
Subsequently, in the third part, we show that one can use these techniques to
compute the sum capacity of an arbitrary two-sender MAC to a fixed additive
precision in quasi-polynomial time. We showcase our method by efficiently
computing the sum capacity of a family of two-sender MACs for which one of the
input alphabets has size two. Furthermore, we demonstrate with an example that
our algorithm may compute the sum capacity to a higher precision than using the
convex relaxation.
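For intuition on the optimization involved, the sum capacity of a two-sender MAC $W(y|x_1,x_2)$ is the maximum of the mutual information $I(X_1 X_2; Y)$ over product input distributions $p_1(x_1)\,p_2(x_2)$, an objective that is nonconvex in $(p_1, p_2)$. The following is only a minimal illustrative sketch of that objective for a small, hypothetical binary-input MAC, using a naive grid search over product distributions; it is not the quasi-polynomial-time algorithm proposed in the paper, and the example channel is an assumption chosen purely for demonstration.

```python
# Sketch: brute-force lower bound on the sum capacity of a two-sender MAC,
# max over p1(x1) p2(x2) of I(X1, X2; Y). Illustrative only.
import itertools
import numpy as np

def mutual_information(p_joint):
    """I(X1,X2;Y) in bits for a joint distribution p(x1, x2, y)."""
    p_x = p_joint.sum(axis=2, keepdims=True)        # p(x1, x2)
    p_y = p_joint.sum(axis=(0, 1), keepdims=True)   # p(y)
    mask = p_joint > 0
    ratio = p_joint[mask] / (p_x * p_y)[mask]
    return float(np.sum(p_joint[mask] * np.log2(ratio)))

def sum_rate_grid_search(W, grid=51):
    """Grid search over product input distributions for binary input alphabets.

    W has shape (|X1|, |X2|, |Y|), with W[x1, x2, :] a distribution over Y.
    """
    best = 0.0
    ts = np.linspace(0.0, 1.0, grid)
    for a, b in itertools.product(ts, ts):
        p1 = np.array([a, 1 - a])   # p(x1)
        p2 = np.array([b, 1 - b])   # p(x2)
        p_joint = p1[:, None, None] * p2[None, :, None] * W
        best = max(best, mutual_information(p_joint))
    return best

# Hypothetical binary-input MAC: the receiver observes the XOR of the two
# inputs passed through a binary symmetric channel with flip probability 0.1.
eps = 0.1
W = np.zeros((2, 2, 2))
for x1, x2 in itertools.product(range(2), range(2)):
    y = x1 ^ x2
    W[x1, x2, y] = 1 - eps
    W[x1, x2, 1 - y] = eps

print(f"grid-search sum-rate estimate: {sum_rate_grid_search(W):.4f} bits")
```

The grid search only yields a lower bound on the sum rate and scales poorly with alphabet size, which is precisely the kind of difficulty the paper's Lipschitz-like-function optimization techniques are meant to address.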
Related papers
- One-shot Multiple Access Channel Simulation [9.271640666465364]
We consider the problem of shared randomness-assisted multiple access channel (MAC) simulation for product inputs.
We characterize the one-shot communication cost region via almost-matching inner and outer bounds in terms of the smooth max-information of the channel.
arXiv Detail & Related papers (2024-10-22T17:18:38Z) - Polynomial-Time Solutions for ReLU Network Training: A Complexity
Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z) - Fixed-Budget Real-Valued Combinatorial Pure Exploration of Multi-Armed
Bandit [65.268245109828]
We first introduce the Combinatorial Successive Assign (CSA) algorithm, which is the first algorithm that can identify the best action even when the size of the action class is exponentially large.
We show that the upper bound of the probability of error of the CSA algorithm matches a lower bound up to a logarithmic factor in the exponent.
We experimentally compare the algorithms with previous methods and show that our algorithm performs better.
arXiv Detail & Related papers (2023-10-24T09:47:32Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Provable benefits of score matching [30.317535687908755]
We give the first example of a natural exponential family of distributions such that score matching loss is computationally efficient to optimize.
We show that designing a zeroth-order or first-order oracle for optimizing the likelihood loss is NP-hard.
Minimizing the score matching loss is both computationally and statistically efficient, with complexity polynomial in the ambient dimension.
arXiv Detail & Related papers (2023-06-03T03:42:30Z) - Clustering a Mixture of Gaussians with Unknown Covariance [4.821312633849745]
We derive a Max-Cut integer program based on maximum likelihood estimation.
We develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size.
We generalize the Max-Cut program to a $k$-means program that handles multi-component mixtures with possibly unequal weights.
arXiv Detail & Related papers (2021-10-04T17:59:20Z) - Distributed Quantum Faithful Simulation and Function Computation Using
Algebraic Structured Measurements [8.594140167290098]
We consider the task of faithfully simulating a quantum measurement acting on a joint bipartite quantum state in a distributed manner.
The computation is performed on the fly, thus obviating the need to reconstruct individual measurement outcomes at Charlie.
arXiv Detail & Related papers (2021-01-07T04:22:15Z) - On the classical capacity of quantum Gaussian measurement [0.0]
We prove Gaussianity of the average state of the optimal ensemble in general.
We discuss the Hypothesis of Gaussian Maximizers concerning the structure of the ensemble.
arXiv Detail & Related papers (2021-01-02T11:11:22Z) - Learning Aggregation Functions [78.47770735205134]
We introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality.
We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures.
arXiv Detail & Related papers (2020-12-15T18:28:53Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture within information-theoretically near-optimal error $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof for clusters from a Gaussian mixture, which can be captured by the constant-degree Sum of Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z) - Best Arm Identification for Cascading Bandits in the Fixed Confidence
Setting [81.70513857417106]
We design and analyze CascadeBAI, an algorithm for finding the best set of $K$ items.
An upper bound on the time complexity of CascadeBAI is derived by overcoming a crucial analytical challenge.
We show, through the derivation of a lower bound on the time complexity, that the performance of CascadeBAI is optimal in some practical regimes.
arXiv Detail & Related papers (2020-01-23T16:47:52Z)