Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
- URL: http://arxiv.org/abs/2310.08177v1
- Date: Thu, 12 Oct 2023 10:03:25 GMT
- Title: Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
- Authors: Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura
Pintor, Ambra Demontis, Battista Biggio
- Abstract summary: We show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss function, the optimizer, and the step-size scheduler.
We release our open-source code at https://www.pralab.com/HO-FMN.
- Score: 12.526318578195724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evaluating the adversarial robustness of machine learning models using
gradient-based attacks is challenging. In this work, we show that
hyperparameter optimization can improve fast minimum-norm attacks by automating
the selection of the loss function, the optimizer and the step-size scheduler,
along with the corresponding hyperparameters. Our extensive evaluation
involving several robust models demonstrates the improved efficacy of fast
minimum-norm attacks when tuned with hyperparameter optimization. We release
our open-source code at https://github.com/pralab/HO-FMN.
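To make the tuning procedure concrete, the following minimal sketch shows how a search over attack hyperparameters (loss function, optimizer, step-size scheduler, step size) could be organized. The search space, the component names, and the placeholder objective below are illustrative assumptions for the sketch, not the configuration grid or API used by HO-FMN.

```python
import random

# Illustrative search space mirroring the components the paper says are tuned
# (loss function, optimizer, step-size scheduler, and their hyperparameters).
# The concrete names and values are assumptions, not HO-FMN's actual grid.
SEARCH_SPACE = {
    "loss": ["cross_entropy", "logit_difference", "dlr"],
    "optimizer": ["sgd", "adam", "rmsprop"],
    "scheduler": ["constant", "cosine", "multistep"],
    "step_size": [0.5, 0.1, 0.05, 0.01],
}


def median_perturbation_norm(config):
    """Placeholder objective: run the minimum-norm attack configured by
    `config` against the model under evaluation and return the median norm
    of the perturbations found (smaller is better). Replace this stub with
    calls to an actual attack implementation."""
    random.seed(hash(tuple(sorted(config.items()))) % (2 ** 32))
    return random.uniform(0.1, 2.0)


def random_search(n_trials=20):
    """Plain random search over attack configurations."""
    best_config, best_score = None, float("inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = median_perturbation_norm(config)
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score


if __name__ == "__main__":
    config, score = random_search()
    print(f"best config: {config} (median perturbation norm {score:.3f})")
```

In practice the stub would be replaced by attack runs on a validation batch, and the random search could be swapped for any off-the-shelf hyperparameter optimizer.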
Related papers
- LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low Rank Adaptation (LoRA) is a popular Parameter-Efficient Fine-Tuning (PEFT) method that effectively adapts large pre-trained models for downstream tasks.
We propose a novel approach that employs a low rank tensor parametrization for model updates.
Our method is both efficient and effective for fine-tuning large language models, achieving a substantial reduction in the number of parameters while maintaining comparable performance.
arXiv Detail & Related papers (2024-10-05T06:59:50Z) - HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks [14.626176607206748]
We propose a parametric variation of the well-known fast minimum-norm attack algorithm.
We re-evaluate 12 robust models, showing that our attack finds smaller adversarial perturbations without requiring any additional tuning.
arXiv Detail & Related papers (2024-07-11T18:30:01Z) - ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
arXiv Detail & Related papers (2024-05-30T17:26:02Z) - Parameter Optimization with Conscious Allocation (POCA) [4.478575931884855]
Hyperband-based approaches to hyperparameter optimization are among the most effective.
We present Parameter Optimization with Conscious Allocation (POCA), a new hyperband-based algorithm that adaptively allocates the inputted budget to the hyperparameter configurations it generates.
POCA finds strong configurations faster in both settings.
arXiv Detail & Related papers (2023-12-29T00:13:55Z) - AdaLomo: Low-memory Optimization with Adaptive Learning Rate [59.64965955386855]
We introduce low-memory optimization with adaptive learning rate (AdaLomo) for large language models.
AdaLomo achieves results on par with AdamW, while significantly reducing memory requirements, thereby lowering the hardware barrier to training large language models.
arXiv Detail & Related papers (2023-10-16T09:04:28Z) - AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient
Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z) - Hyper-parameter optimization based on soft actor critic and hierarchical
mixture regularization [5.063728016437489]
We model the hyper-parameter optimization process as a Markov decision process and tackle it with reinforcement learning.
We propose a novel hyper-parameter optimization method based on soft actor-critic and hierarchical mixture regularization.
arXiv Detail & Related papers (2021-12-08T02:34:43Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters (a generic zeroth-order estimator is sketched after this list).
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - Automatic Hyper-Parameter Optimization Based on Mapping Discovery from
Data to Hyper-Parameters [3.37314595161109]
We propose an efficient automatic hyper-parameter optimization approach, which is based on the mapping from data to the corresponding hyper-parameters.
We show that the proposed approaches significantly outperform the state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-03T19:26:23Z)
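As referenced in the HOZOG entry above, the sketch below illustrates the generic idea of estimating the gradient of a black-box validation objective with averaged random-direction finite differences and using it to update hyperparameters. It is a minimal illustration of zeroth-order estimation under simplifying assumptions, not the HOZOG algorithm itself; the toy objective and all names are invented for the example.

```python
import numpy as np


def zeroth_order_gradient(f, x, n_samples=20, mu=1e-2, rng=None):
    """Estimate the gradient of a black-box scalar function f at x by
    averaging random-direction finite differences:
        g ~= (1/n) * sum_i ((f(x + mu*u_i) - f(x)) / mu) * u_i,
    with u_i drawn from a standard Gaussian."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / n_samples


def hyperparameter_step(f, hparams, lr=0.1, rng=None):
    """One gradient-descent-style update of the hyperparameters using the
    averaged zeroth-order estimate of the validation objective f."""
    return hparams - lr * zeroth_order_gradient(f, hparams, rng=rng)


if __name__ == "__main__":
    # Toy validation objective: a quadratic bowl in two hyperparameters.
    f = lambda h: float((h[0] - 0.3) ** 2 + (h[1] - 0.7) ** 2)
    h, rng = np.array([1.0, 0.0]), np.random.default_rng(0)
    for _ in range(100):
        h = hyperparameter_step(f, h, rng=rng)
    print("tuned hyperparameters:", h)
```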