Performance Evaluation of Adversarial Attacks: Discrepancies and
Solutions
- URL: http://arxiv.org/abs/2104.11103v1
- Date: Thu, 22 Apr 2021 14:36:51 GMT
- Title: Performance Evaluation of Adversarial Attacks: Discrepancies and
Solutions
- Authors: Jing Wu, Mingyi Zhou, Ce Zhu, Yipeng Liu, Mehrtash Harandi, Li Li
- Abstract summary: Adversarial attack methods have been developed to challenge the robustness of machine learning models.
We propose a Piece-wise Sampling Curving (PSC) toolkit to effectively address the discrepancy.
PSC toolkit offers options for balancing the computational cost and evaluation effectiveness.
- Score: 51.8695223602729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, adversarial attack methods have been developed to challenge the
robustness of machine learning models. However, mainstream evaluation criteria
experience limitations, even yielding discrepancies among results under
different settings. By examining various attack algorithms, including
gradient-based and query-based attacks, we notice the lack of a consensus on a
uniform standard for unbiased performance evaluation. Accordingly, we propose a
Piece-wise Sampling Curving (PSC) toolkit to effectively address the
aforementioned discrepancy, by generating a comprehensive comparison among
adversaries in a given range. In addition, the PSC toolkit offers options for
balancing the computational cost and evaluation effectiveness. Experimental
results demonstrate that our PSC toolkit presents comprehensive comparisons of
attack algorithms, significantly reducing discrepancies in practice.
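The piece-wise sampling idea can be illustrated with a toy sketch. Everything below is hypothetical and not the authors' implementation: the two "attacks" are synthetic success-rate curves, and `piecewise_budgets` simply samples the perturbation-budget range coarsely in one segment and finely in another, so that both adversaries are compared at the same points within a given range.

```python
# Toy sketch of piece-wise sampling for comparing two attacks.
# The "attacks" below are synthetic success-rate curves; a real
# evaluation would run each attack at every sampled budget.

def attack_a(eps):
    # hypothetical attack: success rate rises quickly with budget
    return min(1.0, 1.5 * eps)

def attack_b(eps):
    # hypothetical attack: slower start, same ceiling
    return min(1.0, 2.0 * eps ** 2)

def piecewise_budgets(lo, mid, hi, coarse=3, fine=6):
    """Sample [lo, mid) coarsely and [mid, hi] finely."""
    step_c = (mid - lo) / coarse
    step_f = (hi - mid) / fine
    return ([lo + i * step_c for i in range(coarse)] +
            [mid + i * step_f for i in range(fine + 1)])

budgets = piecewise_budgets(0.0, 0.3, 0.6)
curve_a = [attack_a(e) for e in budgets]
curve_b = [attack_b(e) for e in budgets]
for e, a, b in zip(budgets, curve_a, curve_b):
    print(f"eps={e:.2f}  A={a:.2f}  B={b:.2f}")
```

Plotting `curve_a` and `curve_b` against `budgets` yields the kind of side-by-side comparison curve the abstract describes, with the sampling density concentrated where the adversaries differ most.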
Related papers
- Robust CATE Estimation Using Novel Ensemble Methods [0.8246494848934447]
Estimation of Conditional Average Treatment Effects (CATE) is crucial for understanding the heterogeneity of treatment effects in clinical trials.
We evaluate the performance of common methods, including causal forests and various meta-learners, across a diverse set of scenarios.
We propose two new ensemble methods that integrate multiple estimators to enhance prediction stability and performance.
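A minimal sketch of the ensembling idea, under the assumption that it amounts to combining per-unit CATE predictions from several base estimators (the function name, the averaging rule, and the prediction values are all illustrative, not the paper's method):

```python
# Minimal sketch of averaging-based ensembling of CATE estimates.
# The base "estimators" here are precomputed per-unit predictions;
# a real pipeline would fit causal forests, meta-learners, etc.

def ensemble_cate(predictions):
    """Average per-unit CATE predictions across base estimators.

    predictions: list of lists, one inner list per base estimator.
    """
    n_estimators = len(predictions)
    n_units = len(predictions[0])
    return [sum(p[i] for p in predictions) / n_estimators
            for i in range(n_units)]

# Hypothetical predictions from three base learners for four units.
base_preds = [
    [0.8, 1.2, -0.1, 0.5],   # e.g. a causal forest
    [1.0, 1.0,  0.1, 0.3],   # e.g. a T-learner
    [0.6, 1.4,  0.0, 0.4],   # e.g. an X-learner
]
print(ensemble_cate(base_preds))  # averaged CATE per unit
```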
arXiv Detail & Related papers (2024-07-04T07:23:02Z)
- Exploring the Performance of Continuous-Time Dynamic Link Prediction Algorithms [14.82820088479196]
Dynamic Link Prediction (DLP) addresses the prediction of future links in evolving networks.
In this work, we contribute tools to perform such a comprehensive evaluation.
We describe an exhaustive taxonomy of negative sampling methods that can be used at evaluation time.
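One common point in such a taxonomy is uniform destination corruption, sketched below under stated assumptions (the function name and event format are illustrative, not from the paper): for each positive event, the destination node is replaced by a random node that does not form a true edge at that timestamp.

```python
import random

# Toy sketch of one negative sampling strategy for dynamic link
# prediction evaluation: corrupt the destination of each positive
# (src, dst, t) event with a uniformly random node, rejecting
# candidates that are themselves true edges at that time.

def sample_negatives(positives, num_nodes, seed=0):
    rng = random.Random(seed)
    pos_set = set(positives)
    negatives = []
    for src, dst, t in positives:
        while True:
            cand = rng.randrange(num_nodes)
            if (src, cand, t) not in pos_set:
                negatives.append((src, cand, t))
                break
    return negatives

events = [(0, 1, 10), (1, 2, 11), (2, 0, 12)]
print(sample_negatives(events, num_nodes=5))
```

Other strategies in the taxonomy (e.g. historical or inductive negatives) would change only the candidate pool, not the overall evaluation loop.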
arXiv Detail & Related papers (2024-05-27T14:03:28Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
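For context, standard split conformal prediction (the clean-data baseline such a study starts from, not the paper's adversarial evaluation) can be sketched as follows; the calibration scores and function names are illustrative:

```python
import math

# Minimal split conformal prediction sketch: calibrate a score
# threshold so prediction sets cover the true label with
# probability >= 1 - alpha.

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # corrected quantile rank
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(softmax_probs, qhat):
    """Include every label whose nonconformity score 1 - p is <= qhat."""
    return [label for label, p in enumerate(softmax_probs) if 1 - p <= qhat]

# Hypothetical calibration scores (1 - probability of the true class).
cal = [0.1, 0.3, 0.2, 0.05, 0.4, 0.15, 0.25, 0.35, 0.12, 0.22]
qhat = conformal_quantile(cal, alpha=0.1)
print(qhat, prediction_set([0.7, 0.2, 0.1], qhat))
```

Under adversarial perturbations the calibration scores shift, which is exactly the coverage degradation the paper examines.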
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Adversarial Robustness on Image Classification with $k$-means [3.5385056709199536]
We evaluate the vulnerability of $k$-means clustering algorithms to adversarial attacks, emphasising the associated security risks.
We introduce and evaluate an adversarial training method that improves testing performance in adversarial scenarios.
arXiv Detail & Related papers (2023-12-15T04:51:43Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Adversarial Contrastive Learning by Permuting Cluster Assignments [0.8862707047517914]
We propose SwARo, an adversarial contrastive framework that incorporates cluster assignment permutations to generate representative adversarial samples.
We evaluate SwARo on multiple benchmark datasets and against various white-box and black-box attacks, obtaining consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-21T17:49:52Z)
- Assessment of Treatment Effect Estimators for Heavy-Tailed Data [70.72363097550483]
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance.
We provide a novel cross-validation-like methodology to address this challenge.
We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain.
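The train/validate idea behind such a cross-validation-like check can be sketched on synthetic data (the paper's actual methodology is more involved; the helper names, the split rule, and the synthetic RCT below are all assumptions): estimate the treatment effect on one split, then score it against the unbiased difference-in-means on the held-out split.

```python
import random

# Hedged sketch of a cross-validation-style check for a treatment
# effect estimator in an RCT: estimate on one split, then score
# against the unbiased difference-in-means on the held-out split.

def diff_in_means(data):
    treated = [y for t, y in data if t == 1]
    control = [y for t, y in data if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

def holdout_error(data, estimator, seed=0):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train, valid = shuffled[:half], shuffled[half:]
    estimate = estimator(train)
    benchmark = diff_in_means(valid)  # unbiased on the held-out RCT split
    return abs(estimate - benchmark)

# Synthetic RCT: treatment adds ~2 to the outcome.
rng = random.Random(1)
data = [(t, 2.0 * t + rng.gauss(0, 1)) for t in [0, 1] * 200]
print(holdout_error(data, diff_in_means))
```

With heavy-tailed outcomes, both the estimate and the held-out benchmark become noisy, which is the obstacle the paper's methodology is designed to handle.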
arXiv Detail & Related papers (2021-12-14T17:53:01Z)
- Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators [91.3755431537592]
We show that even in arguably the simplest setting, estimation under ignorability assumptions can be misleading.
We consider two popular machine learning benchmark datasets for evaluation of heterogeneous treatment effect estimators.
We highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others.
arXiv Detail & Related papers (2021-07-28T13:21:27Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Machine Learning Clustering Techniques for Selective Mitigation of Critical Design Features [0.16311150636417257]
This paper presents a new methodology which uses machine learning clustering techniques to group flip-flops with similar expected contributions to the overall functional failure rate.
Fault simulation campaigns can then be executed on a per-group basis, significantly reducing the time and cost of the evaluation.
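The grouping step can be sketched with a one-dimensional k-means toy (the sensitivity scores, cluster count, and function names are hypothetical, not the paper's flow): flip-flops with similar expected failure contributions land in the same cluster, so one fault-simulation campaign can cover each group.

```python
import random

# Rough sketch of the grouping idea: cluster flip-flops by a
# feature that predicts their contribution to the functional
# failure rate (here a single hypothetical sensitivity score),
# then fault-simulate representatives per cluster instead of
# every flip-flop individually.

def kmeans_1d(values, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Hypothetical per-flip-flop sensitivity scores.
scores = [0.01, 0.02, 0.03, 0.5, 0.55, 0.6, 0.9, 0.95]
groups = kmeans_1d(scores, k=3)
print(groups)  # each group can share one fault-simulation campaign
```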
arXiv Detail & Related papers (2020-08-31T15:03:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.