Relaxed Models for Adversarial Streaming: The Advice Model and the
Bounded Interruptions Model
- URL: http://arxiv.org/abs/2301.09203v1
- Date: Sun, 22 Jan 2023 21:13:13 GMT
- Authors: Menachem Sadigurschi, Moshe Shechner, Uri Stemmer
- Abstract summary: An adversarial streaming algorithm must maintain utility even when the input stream is chosen adaptively and adversarially.
We present two models that allow us to interpolate between the oblivious and the adversarial models.
This allows us to design robust algorithms with significantly improved space complexity.
- Score: 14.204551125591022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Streaming algorithms are typically analyzed in the oblivious setting, where
we assume that the input stream is fixed in advance. Recently, there has been a
growing interest in designing adversarially robust streaming algorithms that
must maintain utility even when the input stream is chosen adaptively and
adversarially as the execution progresses. While several fascinating results
are known for the adversarial setting, robustness generally comes at a very
high cost in terms of the required space. Motivated by this, in this work we set out to
explore intermediate models that allow us to interpolate between the oblivious
and the adversarial models. Specifically, we put forward the following two
models:
(1) *The advice model*, in which the streaming algorithm may occasionally ask
for one bit of advice.
(2) *The bounded interruptions model*, in which we assume that the adversary
is only partially adaptive.
We present both positive and negative results for each of these two models.
In particular, we present generic reductions from each of these models to the
oblivious model. This allows us to design robust algorithms with significantly
improved space complexity compared to what is known in the plain adversarial
model.
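To make the advice model concrete, here is a minimal, hypothetical Python sketch of a generic reduction to the oblivious model: a wrapper runs several independent copies of an oblivious sketch and occasionally spends one bit of advice asking whether the active copy's estimate is still valid, switching to fresh randomness when the answer is no. The wrapper class, oracle interface, and switching rule are illustrative assumptions, not the paper's actual construction.

```python
class AdviceRobustWrapper:
    """Hypothetical illustration of the advice model: maintain several
    independent copies of an oblivious streaming sketch, and occasionally
    spend one bit of advice asking whether the active copy's estimate is
    still accurate. On a 'no' bit, switch to a copy whose randomness the
    adversary has not yet exploited. (Illustrative only, not the paper's
    construction.)"""

    def __init__(self, make_oblivious_sketch, num_copies, query_every):
        self.copies = [make_oblivious_sketch() for _ in range(num_copies)]
        self.active = 0          # index of the copy whose output we report
        self.query_every = query_every
        self.t = 0               # number of stream items seen so far

    def process(self, item, advice_oracle):
        # Every copy observes the full stream.
        for sketch in self.copies:
            sketch.update(item)
        self.t += 1
        # Occasionally spend one advice bit: "is the current output valid?"
        if self.t % self.query_every == 0:
            if not advice_oracle(self.copies[self.active].estimate()):
                self.active = min(self.active + 1, len(self.copies) - 1)

    def estimate(self):
        return self.copies[self.active].estimate()


class ExactCounter:
    """Toy stand-in for an oblivious sketch: counts items exactly."""
    def __init__(self):
        self.n = 0

    def update(self, item):
        self.n += 1

    def estimate(self):
        return self.n


wrapper = AdviceRobustWrapper(ExactCounter, num_copies=3, query_every=4)
for x in range(10):
    wrapper.process(x, advice_oracle=lambda est: True)  # advice: still valid
print(wrapper.estimate())  # the toy counter is exact, so this prints 10
```

Because each advice query costs only one bit, the space overhead in such a reduction is dominated by the number of oblivious copies, which is exactly the quantity the paper's reductions aim to keep small.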
Related papers
- Model Ensembling for Constrained Optimization [7.4351710906830375]
We consider a setting in which we wish to ensemble models for multidimensional output predictions that are in turn used for downstream optimization.
More precisely, we imagine we are given a number of models mapping a state space to multidimensional real-valued predictions.
These predictions form the coefficients of a linear objective that we would like to optimize under specified constraints.
We apply multicalibration techniques that lead to two provably efficient and convergent algorithms.
arXiv Detail & Related papers (2024-05-27T01:48:07Z) - Precision-Recall Divergence Optimization for Generative Modeling with
GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the *PR-divergences*.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z) - Generalized Relation Modeling for Transformer Tracking [13.837171342738355]
One-stream trackers let the template interact with all parts inside the search region throughout all the encoder layers.
This could potentially lead to target-background confusion when the extracted feature representations are not sufficiently discriminative.
We propose a generalized relation modeling method based on adaptive token division.
Our method is superior to the two-stream and one-stream pipelines and achieves state-of-the-art performance on six challenging benchmarks with a real-time running speed.
arXiv Detail & Related papers (2023-03-29T10:29:25Z) - Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z) - Network Estimation by Mixing: Adaptivity and More [2.3478438171452014]
We propose a mixing strategy that leverages available arbitrary models to improve their individual performances.
The proposed method is computationally efficient and almost tuning-free.
We show that the proposed method performs equally well as the oracle estimate when the true model is included as individual candidates.
arXiv Detail & Related papers (2021-06-05T05:17:04Z) - BODAME: Bilevel Optimization for Defense Against Model Extraction [10.877450596327407]
We consider an adversarial setting to prevent model extraction under the assumption that the attacker will make a best guess at the service provider's model.
We formulate a surrogate model using the predictions of the true model.
We give a tractable transformation and an algorithm for more complicated models that are learned by using gradient descent-based algorithms.
arXiv Detail & Related papers (2021-03-11T17:08:31Z) - Outlier-Robust Learning of Ising Models Under Dobrushin's Condition [57.89518300699042]
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
Our main result is to provide the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees.
arXiv Detail & Related papers (2021-02-03T18:00:57Z) - Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z) - Affine-Invariant Robust Training [0.0]
This project reviews work in spatial robustness methods and proposes zeroth order optimization algorithms to find the worst affine transforms for each input.
The proposed method effectively yields robust models and allows introducing non-parametric adversarial perturbations.
arXiv Detail & Related papers (2020-10-08T18:59:19Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Learning Causal Models Online [103.87959747047158]
Predictive models can rely on spurious correlations in the data for making predictions.
One solution for achieving strong generalization is to incorporate causal structures in the models.
We propose an online algorithm that continually detects and removes spurious features.
arXiv Detail & Related papers (2020-06-12T20:49:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.