A Tale of HodgeRank and Spectral Method: Target Attack Against Rank
Aggregation Is the Fixed Point of Adversarial Game
- URL: http://arxiv.org/abs/2209.05742v1
- Date: Tue, 13 Sep 2022 05:59:02 GMT
- Title: A Tale of HodgeRank and Spectral Method: Target Attack Against Rank
Aggregation Is the Fixed Point of Adversarial Game
- Authors: Ke Ma and Qianqian Xu and Jinshan Zeng and Guorong Li and Xiaochun Cao
and Qingming Huang
- Abstract summary: The intrinsic vulnerability of the rank aggregation methods is not well studied in the literature.
In this paper, we focus on the purposeful adversary who desires to designate the aggregated results by modifying the pairwise data.
The effectiveness of the suggested target attack strategies is demonstrated by a series of toy simulations and several real-world data experiments.
- Score: 153.74942025516853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rank aggregation with pairwise comparisons has shown promising results in
elections, sports competitions, recommendations, and information retrieval.
However, little attention has been paid to the security of such algorithms, in
contrast to the large body of research on their computational and statistical
characteristics. Driven by huge profits, a potential adversary has strong
motivation and incentives to manipulate the ranking list. Meanwhile, the
intrinsic vulnerability of rank aggregation methods is not well studied in the
literature. To fully understand the possible risks, this paper focuses on a
purposeful adversary who seeks to dictate the aggregated results by modifying
the pairwise data. From the perspective of dynamical systems, the attack
behavior with a target ranking list is a fixed point of the composition of the
adversary and the victim. To perform the targeted attack, we formulate the
interaction between the adversary and the victim as a game-theoretic framework
consisting of two continuous operators, for which a Nash equilibrium is
established. Two procedures against HodgeRank and RankCentrality are then
constructed to produce the modification of the original data. Furthermore, we
prove that the victim will produce the target ranking list once the adversary
has complete information. Notably, the proposed methods allow the adversary to
perform the purposeful attack even with only incomplete information or
imperfect feedback. The effectiveness of the suggested target attack strategies
is demonstrated by a series of toy simulations and several real-world data
experiments. These experimental results show that the proposed methods achieve
the attacker's goal in the sense that the leading candidate of the perturbed
ranking list is the one designated by the adversary.
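The two victim aggregators named in the abstract, HodgeRank and RankCentrality, can be sketched in a few lines. The following is a minimal illustration of the standard algorithms only (not the paper's attack procedures); the function names and toy data are our own:

```python
import numpy as np

def hodgerank(n, comparisons):
    """HodgeRank sketch: least-squares scores from pairwise comparisons.

    comparisons: list of (i, j, y) meaning item i beats item j by margin y.
    Solves min_s sum (s_i - s_j - y)^2; an extra row anchors sum(s) = 0,
    since scores are only identified up to an additive constant.
    """
    rows, rhs = [], []
    for i, j, y in comparisons:
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(y)
    rows.append(np.ones(n))  # gauge fixing: scores sum to zero
    rhs.append(0.0)
    s, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return s

def rank_centrality(wins, iters=1000):
    """Rank Centrality sketch: stationary distribution of a comparison chain.

    wins[i, j] = number of times item i beat item j.  A random walk moves
    from i toward j with probability proportional to how often j beat i;
    its stationary distribution scores the items.
    """
    n = wins.shape[0]
    d_max = max(((wins + wins.T) > 0).sum(axis=1).max(), 1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tot = wins[i, j] + wins[j, i]
            if i != j and tot > 0:
                P[i, j] = wins[j, i] / tot / d_max  # step toward the winner j
        P[i, i] = 1.0 - P[i].sum()
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):  # power iteration to the stationary distribution
        pi = pi @ P
    return pi
```

On consistent data (e.g. item 0 beats 1 by 1, 1 beats 2 by 1, 0 beats 2 by 2), `hodgerank` recovers the exact score differences; `rank_centrality` ranks items by their stationary probabilities.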
Related papers
- Preference Poisoning Attacks on Reward Model Learning [47.00395978031771]
We investigate the nature and extent of a vulnerability in learning reward models from pairwise comparisons.
We propose two classes of algorithmic approaches for these attacks: a gradient-based framework, and several variants of rank-by-distance methods.
We find that the best attacks are often highly successful, achieving in the most extreme case 100% success rate with only 0.3% of the data poisoned.
arXiv Detail & Related papers (2024-02-02T21:45:24Z)
- Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners [28.468089304148453]
We attack amortized meta-learners, which allows us to craft colluding sets of inputs that fool the system's learning algorithm.
We show that in a white box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance.
We explore two hypotheses to explain this: 'overfitting' by the attack, and mismatch between the model on which the attack is generated and that to which the attack is transferred.
arXiv Detail & Related papers (2022-11-23T14:55:44Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models [48.93128542994217]
We propose an imitation adversarial attack on black-box neural passage ranking models.
We show that the target passage ranking model can be made transparent and imitated by enumerating critical queries/candidates.
We also propose an innovative gradient-based attack method, empowered by the pairwise objective function, to generate adversarial triggers.
arXiv Detail & Related papers (2022-09-14T09:10:07Z)
- Poisoning Attack against Estimating from Pairwise Comparisons [140.9033911097995]
Attackers have strong motivation and incentives to manipulate the ranking list.
Data poisoning attacks on pairwise ranking algorithms can be formalized as the dynamic and static games between the ranker and the attacker.
We propose two efficient poisoning attack algorithms and establish the associated theoretical guarantees.
arXiv Detail & Related papers (2021-07-05T08:16:01Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Practical Relative Order Attack in Deep Ranking [99.332629807873]
We formulate a new adversarial attack against deep ranking systems, i.e., the Order Attack.
The Order Attack covertly alters the relative order among a selected set of candidates according to an attacker-specified permutation.
It is successfully implemented on a major e-commerce platform.
arXiv Detail & Related papers (2021-03-09T06:41:18Z)
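Several of the papers above study poisoning attacks that flip pairwise comparisons to promote a designated item. A minimal greedy sketch conveys the idea; the win-count aggregator and attack loop below are our own toy illustration, not any of the cited papers' algorithms:

```python
import numpy as np

def win_scores(wins):
    """Toy aggregator: each item's score is its total number of wins."""
    return wins.sum(axis=1)

def greedy_flip_attack(wins, target, budget):
    """Flip up to `budget` observed losses of `target` into wins.

    Each step flips one comparison against the current leader (or any
    opponent the target lost to) and re-aggregates; the loop stops once
    the target leads, i.e. once the attack reaches a fixed point of the
    attacker-victim composition.
    """
    wins = wins.copy().astype(float)
    for _ in range(budget):
        scores = win_scores(wins)
        leader = int(np.argmax(scores))
        if leader == target:
            break  # target already leads: nothing left to flip
        if wins[leader, target] > 0:
            # flip one loss against the current leader
            wins[leader, target] -= 1
            wins[target, leader] += 1
        else:
            # otherwise flip a loss against any opponent
            losers_to = np.where(wins[:, target] > 0)[0]
            if len(losers_to) == 0:
                break  # no losses remain to flip
            j = losers_to[0]
            wins[j, target] -= 1
            wins[target, j] += 1
    return wins
```

Starting from a win matrix where item 0 dominates, a modest budget suffices to make a chosen trailing item the leading candidate under this toy aggregator.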
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.