LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial
Attack
- URL: http://arxiv.org/abs/2103.10787v2
- Date: Mon, 22 Mar 2021 16:07:28 GMT
- Title: LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial
Attack
- Authors: Ashkan Esmaeili, Marzieh Edraki, Nazanin Rahnavard, Mubarak Shah,
Ajmal Mian
- Abstract summary: LSDAT crafts perturbations in the low-dimensional subspace formed by the sparse component of the input sample and that of an adversarial sample.
LSD works directly in the image pixel domain to guarantee that non-$\ell_2$ constraints, such as sparsity, are satisfied.
- Score: 74.5144793386864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose LSDAT, an image-agnostic decision-based black-box attack that
exploits low-rank and sparse decomposition (LSD) to dramatically reduce the
number of queries and achieve superior fooling rates compared to the
state-of-the-art decision-based methods under given imperceptibility
constraints. LSDAT crafts perturbations in the low-dimensional subspace formed
by the sparse component of the input sample and that of an adversarial sample
to obtain query-efficiency. The specific perturbation of interest is obtained
by traversing the path between the input and adversarial sparse components. We
show that the proposed sparse perturbation is the sparse perturbation most
aligned with the shortest path from the input sample to the decision boundary,
for some initial adversarial sample (i.e., the best sparse approximation of the
shortest path, and hence the one most likely to fool the model). Theoretical
analyses are provided to
justify the functionality of LSDAT. Unlike other dimensionality reduction based
techniques aimed at improving query efficiency (e.g., ones based on FFT), LSD
works directly in the image pixel domain to guarantee that non-$\ell_2$
constraints, such as sparsity, are satisfied. LSD offers better control over
the number of queries and provides computational efficiency as it performs
sparse decomposition of the input and adversarial images only once to generate
all queries. We demonstrate $\ell_0$, $\ell_2$ and $\ell_\infty$ bounded
attacks with LSDAT to evince its efficiency compared to baseline decision-based
attacks in diverse low-query budget scenarios as outlined in the experiments.
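As a rough sketch of the idea (not the authors' exact algorithm), the code below splits two grayscale images into low-rank and sparse parts, using a truncated SVD as a stand-in for the paper's LSD step, and then walks along the line between the two sparse components, spending one decision query per step. The `is_adversarial` oracle and all parameter values are illustrative assumptions.
```python
import numpy as np

def lsd(x, rank=10):
    """Truncated-SVD stand-in for low-rank + sparse decomposition: the
    rank-k reconstruction plays the low-rank role and the residual plays
    the sparse role (the paper uses a proper LSD)."""
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return low_rank, x - low_rank

def lsdat_sketch(x, x_adv, is_adversarial, steps=50):
    """Walk from the adversarial sparse component back toward the input's,
    keeping the candidate closest to x that still flips the model's
    decision; `is_adversarial` costs one query per call."""
    _, s_x = lsd(x)
    _, s_adv = lsd(x_adv)
    best = x_adv
    for t in np.linspace(1.0, 0.0, steps):   # shrink the perturbation
        candidate = x + t * (s_adv - s_x)    # move only in the sparse subspace
        if is_adversarial(candidate):
            best = candidate                 # smaller, still fooling
        else:
            break                            # stepped back across the boundary
    return best
```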
Related papers
- Q-VLM: Post-training Quantization for Large Vision-Language Models [73.19871905102545]
We propose a post-training quantization framework for large vision-language models (LVLMs) for efficient multi-modal inference.
We mine the cross-layer dependency that significantly influences discretization errors of the entire vision-language model, and embed this dependency into the optimal quantization strategy.
Experimental results demonstrate that our method compresses memory by 2.78x and increases generation speed by 1.44x on the 13B LLaVA model without performance degradation.
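For orientation, the snippet below shows only the round-to-nearest uniform discretization that post-training quantization methods build on; the paper's contribution, mining cross-layer dependencies to choose quantization parameters jointly, is not reproduced here, and the bit-width is an arbitrary example.
```python
import numpy as np

def quantize_uniform(w, n_bits=4):
    """Round-to-nearest uniform post-training quantization of one weight
    tensor with a per-tensor scale; returns dequantized weights so the
    discretization error can be inspected directly."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale
```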
arXiv Detail & Related papers (2024-10-10T17:02:48Z)
- Contextual Linear Optimization with Bandit Feedback [35.692428244561626]
Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients.
We study a class of offline learning algorithms for CLO with bandit feedback.
We show a fast-rate regret bound for induced empirical risk minimization (IERM) that allows for misspecified model classes and flexible choices of the optimization estimate.
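To fix ideas, here is a minimal sketch of the plain "estimate then optimize" baseline that IERM-style methods refine, under the assumption that the observed bandit-feedback cost is linear in the decision; the variable names and the least-squares plug-in estimator are illustrative choices, not the paper's estimator.
```python
import numpy as np

def estimate_then_optimize(X, Z, costs, feasible_z, x_new):
    """Regress observed costs z^T c(x) on vec(x z^T) features (valid when
    c(x) = W x), then return the feasible decision with the lowest
    predicted cost for a new context."""
    feats = np.stack([np.outer(x, z).ravel() for x, z in zip(X, Z)])
    w, *_ = np.linalg.lstsq(feats, costs, rcond=None)
    preds = [np.outer(x_new, z).ravel() @ w for z in feasible_z]
    return feasible_z[int(np.argmin(preds))]
```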
arXiv Detail & Related papers (2024-05-26T13:27:27Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
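A minimal sketch of such a combined objective, assuming a DPO-style preference loss (the paper's exact loss may differ); the SFT term on the chosen responses acts as the regularizer, and `sft_weight` is a hypothetical knob.
```python
import torch.nn.functional as F

def preference_plus_sft_loss(logp_chosen, logp_rejected,
                             ref_chosen, ref_rejected,
                             beta=0.1, sft_weight=0.5):
    """DPO-style preference loss plus a supervised (maximum-likelihood)
    term on the chosen responses; inputs are summed token log-probs
    under the trained policy and a frozen reference policy."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    pref_loss = -F.logsigmoid(margin).mean()   # preference optimization term
    sft_loss = -logp_chosen.mean()             # supervised learning term
    return pref_loss + sft_weight * sft_loss
```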
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Cooperative Thresholded Lasso for Sparse Linear Bandit [6.52540785559241]
We present a novel approach to address the multi-agent sparse contextual linear bandit problem.
It is the first algorithm to tackle row-wise distributed data in sparse linear bandits.
It is widely applicable to high-dimensional multi-agent problems where efficient feature extraction is critical for minimizing regret.
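As a single-agent building block, thresholded Lasso itself is simple: fit a Lasso, then keep only the coefficients above a threshold as the estimated support. The cooperative, row-wise aggregation across agents is the paper's contribution and is omitted here; parameter values are illustrative.
```python
import numpy as np
from sklearn.linear_model import Lasso

def thresholded_lasso_support(X, y, alpha=0.1, threshold=0.05):
    """Estimate the sparse support: Lasso shrinks most coefficients,
    thresholding prunes the remaining small spurious ones."""
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    return np.flatnonzero(np.abs(coef) > threshold)
```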
arXiv Detail & Related papers (2023-05-30T16:05:44Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted ranges of the ROC curve.
Most existing methods can only optimize PAUC approximately, leading to inevitable, uncontrollable biases.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
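For reference, the one-way quantity itself (average performance over a restricted false-positive-rate range) can be computed with scikit-learn's `max_fpr` option, which returns a standardized partial AUC; this illustrates the measure being optimized, not the paper's algorithm.
```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy scores: OPAUC evaluates the classifier only on FPR in [0, 0.3].
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
opauc = roc_auc_score(y_true, y_score, max_fpr=0.3)
print(f"standardized OPAUC (FPR <= 0.3): {opauc:.3f}")
# TPAUC additionally restricts the TPR range and has no direct
# scikit-learn one-liner.
```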
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
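Loosely in that spirit, the sketch below runs tabular Q-learning that bootstraps from a periodically frozen ("settled") reference table, similar to a target network; the paper's early-settled rule freezes the reference adaptively and couples two Q-sequences, which is not reproduced here. The `env` object with `reset()`/`step()` is an assumed minimal interface.
```python
import numpy as np

def q_learning_with_settled_reference(env, n_states, n_actions,
                                      episodes=500, gamma=0.99,
                                      settle_every=50, eps=0.1):
    """Tabular Q-learning whose bootstrap targets come from a reference
    table that is updated only occasionally, stabilizing the targets."""
    Q = np.zeros((n_states, n_actions))
    Q_ref = Q.copy()
    visits = np.zeros((n_states, n_actions))
    for ep in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = np.random.randint(n_actions) if np.random.rand() < eps \
                else int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            visits[s, a] += 1
            lr = 1.0 / visits[s, a]
            # Bootstrap from the settled reference, not the live table.
            target = r + gamma * (0.0 if done else np.max(Q_ref[s2]))
            Q[s, a] += lr * (target - Q[s, a])
            s = s2
        if (ep + 1) % settle_every == 0:
            Q_ref = Q.copy()   # "settle" the reference infrequently
    return Q
```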
arXiv Detail & Related papers (2021-10-09T21:13:48Z)
- Improved Algorithms for Agnostic Pool-based Active Classification [20.12178157010804]
We consider active learning for binary classification in the agnostic pool-based setting.
Our algorithm outperforms state-of-the-art active learning algorithms on image classification datasets.
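For contrast with the paper's agnostic, guarantee-driven algorithm, here is the generic pool-based baseline such methods are typically compared against: uncertainty sampling with an arbitrary probabilistic classifier. All names and parameters are illustrative.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, labeled_idx, y_labeled, batch=10):
    """One round of pool-based active learning: fit on the labeled set,
    then query the unlabeled points closest to the decision boundary."""
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled_idx], y_labeled)
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)        # closest to 0.5 scores highest
    uncertainty[labeled_idx] = -np.inf        # never re-query labeled points
    return np.argsort(uncertainty)[-batch:]   # indices to send to the labeler
```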
arXiv Detail & Related papers (2021-05-13T18:24:30Z)
- Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints [29.227720674726413]
We propose a fast minimum-norm (FMN) attack that works with different $\ell_p$-norm perturbation models.
Experiments show that FMN significantly outperforms existing attacks in terms of convergence speed and time.
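A simplified minimum-norm attack in the spirit of FMN, assuming a PyTorch classifier; the actual algorithm adapts a per-sample budget using a linear approximation of the distance to the boundary, which this sketch replaces with a multiplicative shrink/grow rule on a shared budget.
```python
import torch
import torch.nn.functional as F

def min_norm_linf_attack(model, x, y, steps=100, gamma=0.05, step_size=0.01):
    """Gradient steps push across the decision boundary while the l_inf
    budget eps shrinks whenever the batch is already adversarial and
    grows otherwise, converging toward a small successful perturbation."""
    delta = torch.zeros_like(x, requires_grad=True)
    eps = 0.3                                      # initial l_inf budget
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            fooled = (model(x + delta).argmax(1) != y).all().item()
            eps *= (1 - gamma) if fooled else (1 + gamma)
            delta += step_size * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)                 # project onto the eps-ball
            delta.grad.zero_()
    return (x + delta).detach()
```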
arXiv Detail & Related papers (2021-02-25T12:56:26Z)
- Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient [62.24615324523435]
This paper provides a statistical analysis of high-dimensional batch Reinforcement Learning (RL) using sparse linear function approximation.
When there is a large number of candidate features, our result shows that sparsity-aware methods can make batch RL more sample-efficient.
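A hedged sketch of one natural instantiation: sparse linear fitted Q-iteration on a fixed batch, with Lasso as the sparsity-aware regressor. Array shapes and hyperparameters are illustrative assumptions, not the paper's algorithm.
```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_fitted_q(phi, phi_next, rewards, gamma=0.99, iters=50, alpha=0.01):
    """Each round fits a Lasso regression of Bellman targets on
    state-action features, so only relevant features get non-zero
    weight. phi: (n, d); phi_next: (n, n_actions, d) features of
    every next-state action."""
    w = np.zeros(phi.shape[1])
    for _ in range(iters):
        q_next = phi_next @ w                      # (n, n_actions)
        targets = rewards + gamma * q_next.max(axis=1)
        w = Lasso(alpha=alpha).fit(phi, targets).coef_
    return w                                       # sparse Q-weights
```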
arXiv Detail & Related papers (2020-11-08T16:48:02Z)