Query-Efficient Adversarial Attack Based on Latin Hypercube Sampling
- URL: http://arxiv.org/abs/2207.02391v1
- Date: Tue, 5 Jul 2022 12:04:44 GMT
- Title: Query-Efficient Adversarial Attack Based on Latin Hypercube Sampling
- Authors: Dan Wang, Jiayu Lin, and Yuan-Gen Wang
- Abstract summary: This paper proposes a Latin Hypercube Sampling based Boundary Attack (LHS-BA) to save query budget.
Experimental results demonstrate the superiority of the proposed LHS-BA over the state-of-the-art BA methods in terms of query efficiency.
- Score: 6.141497251925968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To be applicable in real-world scenarios, Boundary Attacks (BAs) were
proposed; they guarantee a one hundred percent attack success rate using only
decision information. However, existing BA methods craft adversarial examples by
leveraging a simple random sampling (SRS) to estimate the gradient, consuming a
large number of model queries. To overcome the drawback of SRS, this paper
proposes a Latin Hypercube Sampling based Boundary Attack (LHS-BA) to save
query budget. Compared with SRS, LHS has better uniformity under the same
limited number of random samples. Therefore, the average over these random
samples is closer to the true gradient than that estimated by SRS. Various
experiments are conducted on benchmark datasets including MNIST, CIFAR, and
ImageNet-1K. Experimental results demonstrate the superiority of the proposed
LHS-BA over the state-of-the-art BA methods in terms of query efficiency. The
source codes are publicly available at https://github.com/GZHU-DVL/LHS-BA.
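The sampling idea above can be sketched as follows. The Latin hypercube construction is standard; the decision-based gradient averaging is a generic boundary-attack recipe with illustrative names and parameters, not the authors' exact LHS-BA update:

```python
import numpy as np

def latin_hypercube(n_samples, dim, rng):
    """Draw n_samples points in [0, 1)^dim with exactly one point per
    stratum along every axis (the Latin hypercube property), which gives
    better uniformity than simple random sampling for the same budget."""
    # One uniform draw inside each of the n_samples strata, per dimension.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dim))) / n_samples
    # Independently permute the strata along each dimension.
    for j in range(dim):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

def estimate_gradient(f, x, n_samples, eps=1e-2, use_lhs=True, rng=None):
    """Monte-Carlo gradient-direction estimate from decision values
    f(x + eps * d): average random unit directions signed by the binary
    decision oracle f."""
    rng = rng or np.random.default_rng(0)
    dim = x.size
    if use_lhs:
        u = latin_hypercube(n_samples, dim, rng) - 0.5   # center to [-0.5, 0.5)
    else:
        u = rng.random((n_samples, dim)) - 0.5           # simple random sampling
    u /= np.linalg.norm(u, axis=1, keepdims=True)        # unit directions
    signs = np.array([1.0 if f(x + eps * d) else -1.0 for d in u])
    return (signs[:, None] * u).mean(axis=0)
```

Swapping `use_lhs=False` recovers the SRS baseline, so the two estimators can be compared under an identical query budget.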
Related papers
- A Reproducible Analysis of Sequential Recommender Systems [13.987953631479662]
Sequential Recommender Systems (SRSs) have emerged as a highly efficient approach to recommendation.
Existing works exhibit shortcomings in replicability of results, leading to inconsistent statements across papers.
Our work fills these gaps by standardising data pre-processing and model implementations.
arXiv Detail & Related papers (2024-08-07T16:23:29Z) - iBRF: Improved Balanced Random Forest Classifier [0.0]
Class imbalance poses a major challenge in different classification tasks.
We propose a modification to the Balanced Random Forest (BRF) classifier to enhance the prediction performance.
Our proposed hybrid sampling technique, when incorporated into the framework of the Random Forest classifier, achieves better prediction performance than other sampling techniques used in imbalanced classification tasks.
arXiv Detail & Related papers (2024-03-14T20:59:36Z) - Soft Random Sampling: A Theoretical and Empirical Analysis [59.719035355483875]
Soft random sampling (SRS) is a simple yet effective approach for efficient deep neural networks when dealing with massive data.
It selects a subset uniformly at random with replacement from each data set in each epoch.
It is shown to be a powerful and competitive strategy with significant performance gains on real-world industrial-scale tasks.
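The per-epoch selection rule described above can be sketched as follows; the function name, the `ratio` parameter, and the seeding are illustrative assumptions, not the paper's code:

```python
import random

def soft_random_sample(dataset, ratio, rng=None):
    """One epoch of soft random sampling: draw len(dataset) * ratio
    examples uniformly at random *with replacement*, so every epoch
    sees a fresh, possibly repeating, subset of the data."""
    rng = rng or random.Random(0)
    k = max(1, int(len(dataset) * ratio))
    return [dataset[rng.randrange(len(dataset))] for _ in range(k)]
```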
arXiv Detail & Related papers (2023-11-21T17:03:21Z) - Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution [82.50210340928173]
The randomness of diffusion models results in ineffectiveness and instability, making it challenging for users to guarantee the quality of SR results.
We propose a plug-and-play sampling method that owns the potential to benefit a series of diffusion-based SR methods.
The quality of SR results sampled by the proposed method with fewer steps outperforms the quality of results sampled by current methods with randomness from the same pre-trained diffusion-based SR model.
arXiv Detail & Related papers (2023-05-24T17:09:54Z) - Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z) - Markov subsampling based Huber Criterion [13.04847430878172]
Subsampling is an important technique to tackle the computational challenges brought by big data.
We design a new Markov subsampling strategy based on Huber criterion (HMS) to construct an informative subset from the noisy full data.
HMS is built upon a Metropolis-Hastings procedure, where the inclusion probability of each sampling unit is determined by the Huber criterion.
Under mild conditions, we show that the estimator based on the subsamples selected by HMS is statistically consistent with a sub-Gaussian deviation bound.
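The idea can be sketched as a Metropolis-Hastings walk over sample indices that favors units with small Huber loss; the acceptance rule below is a generic MH sketch under that assumption, not the paper's exact HMS criterion:

```python
import math
import random

def huber(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = abs(r)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def markov_subsample(residuals, m, rng=None):
    """Illustrative Metropolis-Hastings chain over sample indices that
    prefers low-Huber-loss (less noisy) units. The chain may revisit
    indices, so the result is a subsample drawn with replacement."""
    rng = rng or random.Random(0)
    n = len(residuals)
    cur = rng.randrange(n)
    chosen = []
    while len(chosen) < m:
        prop = rng.randrange(n)  # symmetric uniform proposal
        # Accept with probability min(1, exp(loss(cur) - loss(prop))).
        if math.log(rng.random() + 1e-12) < huber(residuals[cur]) - huber(residuals[prop]):
            cur = prop
        chosen.append(cur)
    return chosen
```

Units with large residuals are proposed as often as any other but are rarely accepted, so the retained subset concentrates on the less noisy part of the data.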
arXiv Detail & Related papers (2021-12-12T03:11:23Z) - Rethinking Sampling Strategies for Unsupervised Person Re-identification [59.47536050785886]
We analyze the reasons for the performance differences between various sampling strategies under the same framework and loss function.
Group sampling is proposed, which gathers samples from the same class into groups.
Experiments on Market-1501, DukeMTMC-reID and MSMT17 show that group sampling achieves performance comparable to state-of-the-art methods.
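The grouping step described above can be sketched as follows; the group size and shuffling details are illustrative assumptions, not the paper's exact settings:

```python
import random
from collections import defaultdict

def group_sampling(samples, labels, group_size, rng=None):
    """Gather samples sharing a (pseudo-)label into fixed-size groups and
    shuffle the groups, so the unit being sampled during training is a
    whole same-class group rather than an isolated instance."""
    rng = rng or random.Random(0)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    groups = []
    for y, members in by_class.items():
        rng.shuffle(members)
        for i in range(0, len(members), group_size):
            groups.append((y, members[i:i + group_size]))
    rng.shuffle(groups)  # the group, not the instance, is the sampling unit
    return groups
```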
arXiv Detail & Related papers (2021-07-07T05:39:58Z) - LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial
Attack [74.5144793386864]
LSDAT crafts perturbations in the low-dimensional subspace formed by the sparse component of the input sample and that of an adversarial sample.
LSD works directly in the image pixel domain to guarantee that non-$\ell_2$ constraints, such as sparsity, are satisfied.
arXiv Detail & Related papers (2021-03-19T13:10:47Z) - Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN)
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for an $\ell_1$ norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z) - Sequential Density Ratio Estimation for Simultaneous Optimization of
Speed and Accuracy [11.470070927586017]
We propose the SPRT-TANDEM, a deep neural network-based SPRT algorithm that overcomes the above two obstacles.
In tests on one original and two public video databases, the SPRT-TANDEM achieves statistically significantly better classification accuracy than other baselines.
arXiv Detail & Related papers (2020-06-10T01:05:00Z) - Thompson Sampling Algorithms for Mean-Variance Bandits [97.43678751629189]
We develop Thompson Sampling-style algorithms for mean-variance MAB.
We also provide comprehensive regret analyses for Gaussian and Bernoulli bandits.
Our algorithms significantly outperform existing LCB-based algorithms for all risk tolerances.
arXiv Detail & Related papers (2020-02-01T15:33:50Z)
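Thompson Sampling for a Gaussian bandit can be sketched as below; this is the plain mean-reward version with a known unit variance and a standard-normal prior, not the paper's mean-variance variant:

```python
import random

def thompson_gaussian(true_means, n_rounds, rng=None):
    """Thompson Sampling for Gaussian bandits: keep a conjugate Gaussian
    posterior over each arm's mean reward, sample once from every
    posterior, and pull the arm whose sample is largest."""
    rng = rng or random.Random(0)
    k = len(true_means)
    counts = [0] * k
    sums = [0.0] * k
    pulls = []
    for _ in range(n_rounds):
        # With a standard-normal prior, the posterior after n unit-variance
        # observations summing to s is N(s / (n + 1), 1 / (n + 1)).
        samples = [rng.gauss(sums[i] / (counts[i] + 1), (counts[i] + 1) ** -0.5)
                   for i in range(k)]
        arm = max(range(k), key=samples.__getitem__)
        reward = rng.gauss(true_means[arm], 1.0)  # simulated environment
        counts[arm] += 1
        sums[arm] += reward
        pulls.append(arm)
    return pulls
```

As the posteriors concentrate, exploratory pulls of suboptimal arms become rare, which is the mechanism behind the regret guarantees mentioned above.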
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.