Adversarial Attacks on Online Learning to Rank with Stochastic Click
Models
- URL: http://arxiv.org/abs/2305.19218v1
- Date: Tue, 30 May 2023 17:05:49 GMT
- Title: Adversarial Attacks on Online Learning to Rank with Stochastic Click
Models
- Authors: Zichen Wang, Rishab Balasubramanian, Hui Yuan, Chenyu Song, Mengdi
Wang, Huazheng Wang
- Abstract summary: We propose the first study of adversarial attacks on online learning to rank.
The goal of the adversary is to misguide the online learning to rank algorithm into placing the target item at the top of the ranking list a number of times that is linear in the time horizon $T$, at a sublinear attack cost.
- Score: 34.725468803108754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the first study of adversarial attacks on online learning to rank.
The goal of the adversary is to misguide the online learning to rank algorithm
into placing the target item at the top of the ranking list a number of times that
is linear in the time horizon $T$, at a sublinear attack cost. We propose generalized list poisoning
attacks that perturb the ranking list presented to the user. This strategy can
efficiently attack any no-regret ranker in general stochastic click models.
Furthermore, we propose a click poisoning-based strategy named attack-then-quit
that can efficiently attack two representative OLTR algorithms for stochastic
click models. We theoretically analyze the success conditions and the cost upper
bounds of the two proposed methods. Experimental results based on synthetic and real-world
data further validate the effectiveness and cost-efficiency of the proposed
attack strategies.
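To make the attack-then-quit idea concrete, below is a minimal Python sketch in which a click-poisoning adversary corrupts the feedback seen by a toy explore-then-commit ranker under a simulated cascade click model, and then quits once the ranker has committed its list. The explore-then-commit ranker, the cascade click simulator, the attraction probabilities, the exploration budget E, the poisoning rule, and the cost accounting are all illustrative assumptions made for this sketch; they are not the two representative OLTR algorithms, the attack construction, or the bounds analyzed in the paper.

# Toy sketch (not the paper's construction): an attack-then-quit click-poisoning
# adversary against a simple explore-then-commit ranker with a cascade click model.
import numpy as np

rng = np.random.default_rng(0)

L, K, T = 8, 3, 10_000                  # number of items, list length, time horizon
attract = np.linspace(0.9, 0.2, L)      # true attraction probabilities (cascade model)
target = L - 1                          # adversary's target: the least attractive item
E = int(T ** (2 / 3))                   # exploration budget of the toy ranker

clicks = np.zeros(L)                    # observed (possibly poisoned) click counts
views = np.zeros(L)                     # number of times each item was examined
target_on_top, rounds_poisoned = 0, 0

for t in range(T):
    # Toy explore-then-commit ranker: random lists while exploring, then the list is
    # frozen from the empirical click rates and never revised again.
    if t == E:
        committed = np.argsort(-(clicks / np.maximum(views, 1)))[:K]
    ranking = rng.permutation(L)[:K] if t < E else committed
    target_on_top += int(ranking[0] == target)

    # Cascade click model: the user scans top-down and clicks the first attractive item.
    feedback = np.zeros(K, dtype=int)
    for pos, item in enumerate(ranking):
        if rng.random() < attract[item]:
            feedback[pos] = 1
            break

    # Attack-then-quit: poison feedback only while the ranker is still exploring, then
    # quit. Illustrative rule: the target always looks clicked, everything else unclicked.
    if t < E:
        poisoned = np.zeros(K, dtype=int)
        if target in ranking:
            poisoned[int(np.where(ranking == target)[0][0])] = 1
        rounds_poisoned += int(np.any(poisoned != feedback))
        feedback = poisoned

    # Ranker statistics: in the cascade model, items up to the first click are examined.
    last = int(np.argmax(feedback)) if feedback.any() else K - 1
    for pos, item in enumerate(ranking[: last + 1]):
        views[item] += 1
        clicks[item] += feedback[pos]

print(f"target on top in {target_on_top}/{T} rounds; "
      f"feedback altered in {rounds_poisoned} of the first {E} rounds")

In this toy, the adversary alters feedback in at most $E = O(T^{2/3})$ rounds (sublinear cost) while the target occupies the top position for the remaining $T - E$ rounds (linear in $T$). The generalized list-poisoning strategy from the abstract would instead perturb the ranking list presented to the user rather than the recorded clicks.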
Related papers
- Adversarial Attacks on Online Learning to Rank with Click Feedback [18.614785011987756]
Online learning to rank is a sequential decision-making problem where a learning agent selects an ordered list of items and receives feedback through user clicks.
This paper studies attack strategies against multiple variants of OLTR.
We propose a general attack strategy against any algorithm under the general click model.
arXiv Detail & Related papers (2023-05-26T16:28:26Z)
- Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models [48.93128542994217]
We propose an imitation adversarial attack on black-box neural passage ranking models.
We show that the target passage ranking model can be made transparent and imitated by enumerating critical queries/candidates.
We also propose an innovative gradient-based attack method, empowered by the pairwise objective function, to generate adversarial triggers.
arXiv Detail & Related papers (2022-09-14T09:10:07Z)
- A Tale of HodgeRank and Spectral Method: Target Attack Against Rank Aggregation Is the Fixed Point of Adversarial Game [153.74942025516853]
The intrinsic vulnerability of rank aggregation methods is not well studied in the literature.
In this paper, we focus on a purposeful adversary who aims to control the aggregated results by modifying the pairwise data.
The effectiveness of the suggested target attack strategies is demonstrated by a series of toy simulations and several real-world data experiments.
arXiv Detail & Related papers (2022-09-13T05:59:02Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- Poisoning Attack against Estimating from Pairwise Comparisons [140.9033911097995]
Attackers have strong motivation and incentives to manipulate the ranking list.
Data poisoning attacks on pairwise ranking algorithms can be formalized as dynamic and static games between the ranker and the attacker.
We propose two efficient poisoning attack algorithms and establish the associated theoretical guarantees.
arXiv Detail & Related papers (2021-07-05T08:16:01Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm that yields a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.