Contextual Search in the Presence of Adversarial Corruptions
- URL: http://arxiv.org/abs/2002.11650v6
- Date: Sat, 6 Aug 2022 15:05:24 GMT
- Title: Contextual Search in the Presence of Adversarial Corruptions
- Authors: Akshay Krishnamurthy, Thodoris Lykouris, Chara Podimata, and Robert
Schapire
- Abstract summary: We study contextual search, a generalization of binary search in higher dimensions.
We show that these algorithms attain near-optimal regret in the absence of adversarial corruptions.
Our techniques draw inspiration from learning theory, game theory, high-dimensional geometry, and convex analysis.
- Score: 33.28268414842846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study contextual search, a generalization of binary search in higher
dimensions, which captures settings such as feature-based dynamic pricing.
Standard formulations of this problem assume that agents act in accordance with
a specific homogeneous response model. In practice, however, some responses may
be adversarially corrupted. Existing algorithms heavily depend on the assumed
response model being (approximately) accurate for all agents and have poor
performance in the presence of even a few such arbitrary misspecifications.
We initiate the study of contextual search when some of the agents can behave
in ways inconsistent with the underlying response model. In particular, we
provide two algorithms, one based on multidimensional binary search methods and
one based on gradient descent. We show that these algorithms attain
near-optimal regret in the absence of adversarial corruptions and their
performance degrades gracefully with the number of such agents, providing the
first results for contextual search in any adversarial noise model. Our
techniques draw inspiration from learning theory, game theory, high-dimensional
geometry, and convex analysis.
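To make the setting concrete, below is a minimal toy sketch of contextual search for feature-based dynamic pricing with binary (sale / no-sale) feedback, where a handful of responses are adversarially corrupted. It uses a simple projected-gradient-style update in the spirit of the gradient-descent approach mentioned in the abstract; the step size, corruption model, and regret accounting are assumptions for illustration only, not the paper's algorithm.

```python
import numpy as np

# Toy sketch of contextual search for feature-based dynamic pricing.
# Illustrative gradient-descent-style learner, NOT the algorithm from
# the paper; step size, corruption model, and regret accounting are
# assumptions made for this example.

rng = np.random.default_rng(0)
d, T, num_corrupted = 5, 2000, 20
theta_star = rng.uniform(0.0, 1.0, size=d)
theta_star /= np.linalg.norm(theta_star)           # hidden valuation vector

theta_hat = np.zeros(d)                             # learner's estimate
eta = 1.0 / np.sqrt(T)                              # step size (assumed)
corrupted_rounds = set(rng.choice(T, size=num_corrupted, replace=False))

total_regret = 0.0
for t in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                          # context (feature vector)
    value = theta_star @ x                          # agent's true valuation
    price = theta_hat @ x                           # posted price

    sold = price <= value                           # honest binary response
    if t in corrupted_rounds:
        sold = not sold                             # adversarially corrupted response

    # Gradient-style update on the sign feedback: move the estimate toward
    # contexts that sold (price likely too low) and away from ones that did not.
    g = -x if sold else x
    theta_hat = theta_hat - eta * g
    norm = np.linalg.norm(theta_hat)
    if norm > 1.0:
        theta_hat /= norm                           # project back to the unit ball

    # Pricing regret: lost margin on a sale, lost full value on a missed sale.
    total_regret += max(value - price, 0.0) if sold else value

print(f"average regret over {T} rounds: {total_regret / T:.4f}")
```

In this toy run the estimate converges toward the hidden valuation vector when responses are honest, while the few corrupted rounds only perturb individual updates; the paper's algorithms are designed so that regret degrades gracefully with the number of such corrupted agents.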
Related papers
- Scalable Decentralized Algorithms for Online Personalized Mean Estimation [12.002609934938224]
This study focuses on a simplified version of the overarching problem, where each agent collects samples from a real-valued distribution over time to estimate its mean.
We introduce two collaborative mean estimation algorithms: one draws inspiration from belief propagation, while the other employs a consensus-based approach.
arXiv Detail & Related papers (2024-02-20T08:30:46Z)
- The Perception-Robustness Tradeoff in Deterministic Image Restoration [34.50287066865267]
We study the behavior of deterministic methods for solving inverse problems in imaging.
To approach perfect quality and perfect consistency, the Lipschitz constant of the model must grow to infinity.
We demonstrate our theory on single image super-resolution algorithms, addressing both noisy and noiseless settings.
arXiv Detail & Related papers (2023-11-14T18:30:34Z)
- PSDiff: Diffusion Model for Person Search with Iterative and Collaborative Refinement [59.6260680005195]
We present a novel Person Search framework based on the Diffusion model, PSDiff.
PSDiff formulates person search as a dual denoising process from noisy boxes and ReID embeddings to ground truths.
Following the new paradigm, we further design a new Collaborative Denoising Layer (CDL) to optimize detection and ReID sub-tasks in an iterative and collaborative way.
arXiv Detail & Related papers (2023-09-20T08:16:39Z)
- Efficient Approximate Recovery from Pooled Data Using Doubly Regular Pooling Schemes [1.7403133838762448]
We analyze an approximate reconstruction algorithm that estimates the hidden bits in a greedy fashion.
Our analysis is uniform in the degree of noise and the sparsity of $\sigma$.
arXiv Detail & Related papers (2023-02-28T19:31:40Z)
- RetrievalGuard: Provably Robust 1-Nearest Neighbor Image Retrieval [84.33752026418045]
We propose the first 1-nearest neighbor (NN) image retrieval algorithm, RetrievalGuard, which is provably robust against adversarial perturbations.
We show that the smoothed retrieval model has bounded Lipschitz constant and thus the retrieval score is invariant to $\ell_2$ adversarial perturbations.
arXiv Detail & Related papers (2022-06-17T16:50:50Z)
- Selection-Expansion: A Unifying Framework for Motion-Planning and Diversity Search Algorithms [69.87173070473717]
We investigate the properties of two diversity search algorithms, the Novelty Search and the Goal Exploration Process algorithms.
The relation to MP algorithms reveals that the smoothness, or lack of smoothness, of the mapping between the policy parameter space and the outcome space plays a key role in search efficiency.
arXiv Detail & Related papers (2021-04-10T13:52:27Z)
- Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams [69.4411417775822]
Adversarial examples are a widely studied phenomenon in machine learning models.
We propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification.
arXiv Detail & Related papers (2020-11-19T08:49:10Z)
- Query complexity of adversarial attacks [4.873362301533825]
Two main attack models are considered in the adversarial literature: black-box and white-box.
We consider these threat models as two ends of a fine-grained spectrum, indexed by the number of queries the adversary can ask.
We investigate how many queries the adversary needs to make to design an attack that is comparable to the best possible attack in the white-box model.
arXiv Detail & Related papers (2020-10-02T15:01:29Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied to economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings because they require intolerable running time or too many computational resources.
We propose NeuSearcher, which leverages knowledge learned from previous instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Consistency of a Recurrent Language Model With Respect to Incomplete Decoding [67.54760086239514]
We study the issue of receiving infinite-length sequences from a recurrent language model.
We propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model.
arXiv Detail & Related papers (2020-02-06T19:56:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.