Automated Robustness with Adversarial Training as a Post-Processing Step
- URL: http://arxiv.org/abs/2109.02532v1
- Date: Mon, 6 Sep 2021 15:17:08 GMT
- Title: Automated Robustness with Adversarial Training as a Post-Processing Step
- Authors: Ambrish Rawat, Mathieu Sinn, Beat Buesser
- Abstract summary: This work explores the efficacy of a simple post-processing step in yielding robust deep learning models.
We adopt adversarial training as a post-processing step for optimised network architectures obtained from a neural architecture search algorithm.
- Score: 5.55549775099824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training is a computationally expensive task and hence searching
for neural network architectures with robustness as the criterion can be
challenging. As a step towards practical automation, this work explores the
efficacy of a simple post-processing step in yielding robust deep learning
models. To achieve this, we adopt adversarial training as a post-processing step
for optimised network architectures obtained from a neural architecture search
algorithm. Specific policies are adopted for tuning the hyperparameters of the
different steps, resulting in a fully automated pipeline for generating
adversarially robust deep learning models. We evidence the usefulness of the
proposed pipeline with extensive experimentation across 11 image classification
and 9 text classification tasks.
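The core recipe of the paper (take an already-optimised architecture, then harden it with adversarial training afterwards) can be sketched in miniature. The sketch below is an illustrative stand-in rather than the paper's pipeline: it uses a NumPy logistic-regression classifier in place of a searched network and the classic FGSM attack as the inner adversary; the function names and hyperparameters are assumptions for illustration.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: move each input one step of size eps in the sign
    of the loss gradient w.r.t. the input (for a linear model)."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))                    # sigmoid probabilities
    grad_x = (p - y)[:, None] * w[None, :]          # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

def adversarial_finetune(x, y, w, b, eps=0.5, lr=0.5, steps=200):
    """Post-processing step: starting from trained parameters (w, b),
    repeatedly craft FGSM examples and take gradient steps on them."""
    for _ in range(steps):
        x_adv = fgsm_perturb(x, y, w, b, eps)
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        w = w - lr * (x_adv.T @ (p - y)) / len(y)   # gradient step on adversarial batch
        b = b - lr * np.mean(p - y)
    return w, b
```

In the paper's setting the inner model would be the NAS-produced network and the attack/training hyperparameters would be set by the tuning policies described in the abstract.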
Related papers
- A Survey on Neural Architecture Search Based on Reinforcement Learning [0.0]
This paper introduces the overall development of Neural Architecture Search.
We then focus mainly on providing a comprehensive and accessible survey of Neural Architecture Search works.
arXiv Detail & Related papers (2024-09-26T17:28:10Z) - Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - FootstepNet: an Efficient Actor-Critic Method for Fast On-line Bipedal Footstep Planning and Forecasting [0.0]
We propose an efficient footstep planning method to navigate in local environments with obstacles.
We also propose a forecasting method that quickly estimates the number of footsteps required to reach different candidate local targets.
We demonstrate the validity of our approach with simulation results, and by a deployment on a kid-size humanoid robot during the RoboCup 2023 competition.
arXiv Detail & Related papers (2024-03-19T09:48:18Z) - Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method to search out desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior art by around 1.0% top-1 accuracy on the ImageNet-1000 classification task.
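For intuition about channel pruning, a far simpler baseline than PaS's learned search is magnitude-based selection: score each output channel of a convolution by the L1 norm of its filter and keep only the strongest fraction. This is an illustrative sketch, not the paper's method; the function name and `keep_ratio` parameter are assumptions.

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Rank output channels of a conv weight tensor shaped
    (out_channels, in_channels, kh, kw) by L1 norm and keep the
    top `keep_ratio` fraction. Returns the slimmed tensor and the
    indices of the surviving channels."""
    scores = np.abs(weights).sum(axis=(1, 2, 3))    # one score per output channel
    k = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:k])    # strongest k, in original order
    return weights[keep], keep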
arXiv Detail & Related papers (2022-06-02T17:58:54Z) - Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks [9.293334856614628]
This paper presents a novel structured network pruning method with auxiliary gating structures.
Our experiments demonstrate that our method achieves state-of-the-art compression performance on classification tasks.
arXiv Detail & Related papers (2022-05-07T09:03:32Z) - Divide & Conquer Imitation Learning [75.31752559017978]
Imitation Learning can be a powerful approach to bootstrap the learning process.
We present a novel algorithm designed to imitate complex robotic tasks from the states of an expert trajectory.
We show that our method imitates a non-holonomic navigation task and scales to a complex simulated robotic manipulation task with very high sample efficiency.
arXiv Detail & Related papers (2022-04-15T09:56:50Z) - CHASE: Robust Visual Tracking via Cell-Level Differentiable Neural Architecture Search [14.702573109803307]
We propose a novel cell-level differentiable architecture search mechanism to automate the network design of the tracking module.
The proposed approach is simple, efficient, and with no need to stack a series of modules to construct a network.
Our approach is easily incorporated into existing trackers, which we validate empirically using different differentiable architecture search-based methods and tracking objectives.
arXiv Detail & Related papers (2021-07-02T15:16:45Z) - Automated Evolutionary Approach for the Design of Composite Machine Learning Pipelines [48.7576911714538]
The proposed approach aims to automate the design of composite machine learning pipelines.
It designs the pipelines with a customizable graph-based structure, analyzes the obtained results, and reproduces them.
The software implementation of this approach is presented as an open-source framework.
arXiv Detail & Related papers (2021-06-26T23:19:06Z) - Learning to Stop While Learning to Predict [85.7136203122784]
Many algorithm-inspired deep models are restricted to a "fixed depth" for all inputs.
Similar to algorithms, the optimal depth of a deep architecture may be different for different input instances.
In this paper, we tackle this varying depth problem using a steerable architecture.
We show that the learned deep model along with the stopping policy improves the performances on a diverse set of tasks.
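A minimal way to see per-input depth is a refinement loop that halts once its update becomes small, so easy inputs exit early and hard inputs receive more depth. A fixed threshold here stands in for the paper's learned stopping policy; the function name and example layer are illustrative assumptions.

```python
import numpy as np

def run_with_stopping(x, layer, max_depth=50, tol=1e-4):
    """Apply `layer` repeatedly and stop as soon as the update is
    smaller than `tol`. Returns the output and the depth used."""
    depth = 0
    for depth in range(1, max_depth + 1):
        x_next = layer(x)
        done = np.max(np.abs(x_next - x)) < tol
        x = x_next
        if done:
            break
    return x, depth
```

For example, with the contraction `v -> 0.5 * v + 1` (fixed point 2), an input already at the fixed point stops after one application, while a distant input iterates many more times before the stopping rule fires.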
arXiv Detail & Related papers (2020-06-09T07:22:01Z) - Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.