Adversarial Attacks on Online Learning to Rank with Click Feedback
- URL: http://arxiv.org/abs/2305.17071v1
- Date: Fri, 26 May 2023 16:28:26 GMT
- Title: Adversarial Attacks on Online Learning to Rank with Click Feedback
- Authors: Jinhang Zuo, Zhiyao Zhang, Zhiyong Wang, Shuai Li, Mohammad
Hajiesmaili, Adam Wierman
- Abstract summary: Online learning to rank is a sequential decision-making problem where a learning agent selects an ordered list of items and receives feedback through user clicks.
This paper studies attack strategies against multiple variants of OLTR.
We propose a general attack strategy against any algorithm under the general click model.
- Score: 18.614785011987756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online learning to rank (OLTR) is a sequential decision-making problem where
a learning agent selects an ordered list of items and receives feedback through
user clicks. Although potential attacks against OLTR algorithms may cause
serious losses in real-world applications, little is known about adversarial
attacks on OLTR. This paper studies attack strategies against multiple variants
of OLTR. Our first result provides an attack strategy against the UCB algorithm
on classical stochastic bandits with binary feedback, which solves the key
issues caused by bounded and discrete feedback that previous works cannot
handle. Building on this result, we design attack algorithms against UCB-based
OLTR algorithms in position-based and cascade models. Finally, we propose a
general attack strategy against any algorithm under the general click model.
Each attack algorithm manipulates the learning agent into choosing the target
attack item $T-o(T)$ times, incurring a cumulative cost of $o(T)$. Experiments
on synthetic and real data further validate the effectiveness of our proposed
attack algorithms.
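The first result above, attacking UCB under binary (click) feedback, can be illustrated with a minimal simulation sketch. This is not the paper's algorithm: the arm means, the fixed suppression margin, and the flip rule are illustrative assumptions. The attacker flips clicks (1 → 0) on non-target arms so that their empirical means, as observed by the learner, stay below the target arm's, steering UCB toward the target while paying only for the flipped clicks.

```python
import math
import random

def ucb_with_attack(T=20000, means=(0.9, 0.7, 0.5), target=2, seed=0):
    """Simulate UCB1 on Bernoulli arms while an attacker flips clicks
    (1 -> 0) on non-target arms to force the target arm to the top.
    Returns (target_pull_count, attack_cost). Illustrative sketch only."""
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K   # pulls of each arm, as seen by the learner
    sums = [0.0] * K   # post-attack reward sums, as seen by the learner
    cost = 0           # attack cost = number of flipped clicks
    for t in range(1, T + 1):
        if t <= K:
            arm = t - 1  # initialization: pull each arm once
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(range(K), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        click = 1 if rng.random() < means[arm] else 0
        # Attack rule (assumed, with a fixed 0.2 margin): suppress the
        # click if the non-target arm would otherwise stay competitive
        # with the target's observed mean.
        if arm != target and click == 1 and counts[target] > 0:
            threshold = max(sums[target] / counts[target] - 0.2, 0.0)
            if sums[arm] / (counts[arm] + 1) >= threshold:
                click = 0
                cost += 1
        counts[arm] += 1
        sums[arm] += click
    return counts[target], cost
```

Under these assumptions, the suppressed arms' observed means hover below the target's, so UCB pulls the target arm for the vast majority of the horizon while the number of flipped clicks stays small relative to T, mirroring the T−o(T) pulls at o(T) cost behavior described in the abstract.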
Related papers
- Universal Black-Box Reward Poisoning Attack against Offline Reinforcement Learning [4.629358641630161]
We study the problem of universal black-box reward poisoning attacks against general offline reinforcement learning with deep neural networks.
We propose the first universal black-box reward poisoning attack in the general offline RL setting.
arXiv Detail & Related papers (2024-02-15T04:08:49Z) - Adversarial Attacks on Online Learning to Rank with Stochastic Click Models [34.725468803108754]
We propose the first study of adversarial attacks on online learning to rank.
The goal of the adversary is to misguide the online learning to rank algorithm into placing the target item at the top of the ranking list a number of times linear in the time horizon $T$, with a sublinear attack cost.
arXiv Detail & Related papers (2023-05-30T17:05:49Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z) - Adversarial Attacks on Gaussian Process Bandits [47.84198626686564]
We propose various adversarial attack methods with differing assumptions on the attacker's strength and prior information.
Our goal is to understand adversarial attacks on GP bandits from both a theoretical and practical perspective.
We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack budget.
arXiv Detail & Related papers (2021-10-16T02:39:10Z) - Poisoning Attack against Estimating from Pairwise Comparisons [140.9033911097995]
Attackers have strong motivation and incentives to manipulate the ranking list.
Data poisoning attacks on pairwise ranking algorithms can be formalized as the dynamic and static games between the ranker and the attacker.
We propose two efficient poisoning attack algorithms and establish the associated theoretical guarantees.
arXiv Detail & Related papers (2021-07-05T08:16:01Z) - Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose algoname, a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z) - Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense [45.408568528354216]
We introduce a new class of attack named action-manipulation attack.
In this attack, an adversary can change the action signal selected by the user.
To defend against this class of attacks, we introduce a novel algorithm that is robust to action-manipulation attacks.
arXiv Detail & Related papers (2020-02-19T04:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.