Robust recovery for stochastic block models
- URL: http://arxiv.org/abs/2111.08568v1
- Date: Tue, 16 Nov 2021 15:43:00 GMT
- Title: Robust recovery for stochastic block models
- Authors: Jingqiu Ding, Tommaso d'Orsi, Rajai Nasser, David Steurer
- Abstract summary: We develop an efficient algorithm for weak recovery in a robust version of the block model.
Our results show that there is no price of robustness in the block model.
- Score: 16.74630355427558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an efficient algorithm for weak recovery in a robust version of
the stochastic block model. The algorithm matches the statistical guarantees of
the best known algorithms for the vanilla version of the stochastic block
model. In this sense, our results show that there is no price of robustness in
the stochastic block model. Our work is heavily inspired by recent work of
Banks, Mohanty, and Raghavendra (SODA 2021) that provided an efficient
algorithm for the corresponding distinguishing problem. Our algorithm and its
analysis significantly depart from previous ones for robust recovery. A key
challenge is the peculiar optimization landscape underlying our algorithm: The
planted partition may be far from optimal in the sense that completely
unrelated solutions could achieve the same objective value. This phenomenon is
related to the push-out effect at the BBP phase transition for PCA. To the best
of our knowledge, our algorithm is the first to achieve robust recovery in the
presence of such a push-out effect in a non-asymptotic setting. Our algorithm
is an instantiation of a framework based on convex optimization (related to but
distinct from sum-of-squares), which may be useful for other robust matrix
estimation problems. A by-product of our analysis is a general technique that
boosts the probability of success (over the randomness of the input) of an
arbitrary robust weak-recovery algorithm from constant (or slowly vanishing)
probability to exponentially high probability.
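For context, here is a minimal, illustrative sketch of the weak-recovery task in the vanilla two-community stochastic block model, solved with a plain spectral method. This is not the paper's robust convex-optimization algorithm; the parameters n, a, b and the centering step are illustrative assumptions, and plain spectral methods of this kind are known to be fragile under adversarial corruptions, which is precisely the gap the paper addresses.
```python
# Hedged sketch: weak recovery in a (non-robust) two-community stochastic
# block model via a plain spectral method. NOT the paper's algorithm; it
# only illustrates the weak-recovery task the abstract refers to.
# Assumed, illustrative parameters: n vertices, within/across degrees a, b.
import numpy as np

rng = np.random.default_rng(0)

n = 2000                  # number of vertices (illustrative)
a, b = 25.0, 5.0          # expected within/across-community degrees (illustrative)
labels = rng.choice([-1, 1], size=n)          # planted partition

# Sample the adjacency matrix: edge probability a/n inside a community,
# b/n across communities.
same = np.equal.outer(labels, labels)
probs = np.where(same, a / n, b / n)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(float)

# Center by the average edge density and take the top eigenvector of the
# centered matrix; its signs estimate the communities (up to a global flip).
d = (a + b) / 2.0
M = A - (d / n) * np.ones((n, n))
eigvals, eigvecs = np.linalg.eigh(M)
estimate = np.sign(eigvecs[:, -1])

# Weak recovery = correlation with the truth bounded away from 0.
overlap = abs(np.mean(estimate * labels))
print(f"overlap with planted partition: {overlap:.3f}")
```
Above the threshold (a - b)^2 > 2(a + b), the signal eigenvalue separates from the spectral bulk, which is the push-out phenomenon the abstract alludes to; below it, the measured overlap collapses to roughly zero.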
Related papers
- Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
- Semi-Bandit Learning for Monotone Stochastic Optimization [20.776114616154242]
We provide a generic online learning algorithm for a class of "monotone" problems.
Our framework applies to several fundamental problems in optimization such as prophet inequality, Pandora's box, knapsack, matchings, and submodular optimization.
arXiv Detail & Related papers (2023-12-24T07:46:37Z)
- A Novel Framework for Improving the Breakdown Point of Robust Regression Algorithms [1.9594639581421422]
We present an effective framework for improving the breakdown point of robust regression algorithms.
We derive a consistent robust regression algorithm with iterative local search (CORALS).
arXiv Detail & Related papers (2023-05-20T15:59:33Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with undesirable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Towards Feature-Based Performance Regression Using Trajectory Data [0.9281671380673306]
Black-box optimization is a very active area of research, with many new algorithms being developed every year.
The variety of algorithms poses a meta-problem: which algorithm to choose for a given problem at hand?
Past research has shown that per-instance algorithm selection based on exploratory landscape analysis can be an efficient means to tackle this meta-problem.
arXiv Detail & Related papers (2021-02-10T10:19:13Z)
- Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate.
Our algorithm is based on a new concentration technique, and we bound its sample complexity.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Kidney Exchange with Inhomogeneous Edge Existence Uncertainty [33.17472228570093]
We study a matched cycle and chain packing problem, where we aim to identify structures in a directed graph that are robust to edge failure.
Our approach on data from the United Network for Organ Sharing (UNOS) provides better performance with the same weights as an SAA-based method.
arXiv Detail & Related papers (2020-07-07T04:08:39Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP)
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)