Robust Learning via Persistency of Excitation
- URL: http://arxiv.org/abs/2106.02078v2
- Date: Mon, 7 Jun 2021 16:12:55 GMT
- Title: Robust Learning via Persistency of Excitation
- Authors: Kaustubh Sridhar, Oleg Sokolsky, Insup Lee, James Weimer
- Abstract summary: We show that network training using gradient descent is equivalent to a dynamical system parameter estimation problem.
We provide an efficient technique for estimating the corresponding Lipschitz constant using extreme value theory.
Our approach also universally increases the adversarial accuracy by 0.1 to 0.3 percentage points in various state-of-the-art adversarially trained models.
- Score: 4.674053902991301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving adversarial robustness of neural networks remains a major
challenge. Fundamentally, training a network is a parameter estimation problem.
In adaptive control theory, maintaining persistency of excitation (PoE) is
integral to ensuring convergence of parameter estimates in dynamical systems to
their robust optima. In this work, we show that network training using gradient
descent is equivalent to a dynamical system parameter estimation problem.
Leveraging this relationship, we prove that a sufficient condition for PoE of
gradient descent is that the learning rate be less than the inverse of the
Lipschitz constant of the gradient of the loss function. We provide an
efficient technique for estimating the corresponding Lipschitz constant using
extreme value theory and demonstrate that, by scaling only the learning rate
schedule, we can increase adversarial accuracy by up to 15 percentage points on
benchmark datasets. Our approach also universally increases the adversarial accuracy by
0.1 to 0.3 percentage points across various state-of-the-art adversarially trained models on
the AutoAttack benchmark, where every small margin of improvement is
significant.
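The two-step recipe described in the abstract can be illustrated with a short sketch: estimate the Lipschitz constant of the loss gradient from sampled gradient-difference ratios using an extreme-value (reverse Weibull) fit, then keep the learning rate below the inverse of that estimate. This is a minimal sketch on a toy quadratic loss; the sampling scheme, block size, and fitting details are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch: EVT-based Lipschitz estimation + learning rate below 1/L.
import numpy as np
from scipy.stats import weibull_max

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
H = A.T @ A                       # Hessian of the toy loss; true L = ||H||_2

def grad(w):
    """Gradient of the toy quadratic loss 0.5 * w @ H @ w."""
    return H @ w

def estimate_lipschitz(n_blocks=50, samples_per_block=200, radius=1.0):
    """Estimate sup ||grad(x)-grad(y)|| / ||x-y|| via block maxima + reverse Weibull."""
    maxima = []
    for _ in range(n_blocks):
        ratios = []
        for _ in range(samples_per_block):
            x = rng.uniform(-radius, radius, size=5)
            y = rng.uniform(-radius, radius, size=5)
            ratios.append(np.linalg.norm(grad(x) - grad(y)) / np.linalg.norm(x - y))
        maxima.append(max(ratios))
    # The location parameter of the reverse Weibull fit estimates the right
    # endpoint of the ratio distribution, i.e. the local Lipschitz constant.
    _, loc, _ = weibull_max.fit(maxima, loc=max(maxima) * 1.01)
    return loc

L_hat = estimate_lipschitz()
lr = 0.9 / L_hat                  # keep the learning rate below 1 / L
print(f"estimated L = {L_hat:.3f}, "
      f"spectral norm of H = {np.linalg.norm(H, 2):.3f}, lr = {lr:.4f}")
```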
Related papers
- Automatic debiasing of neural networks via moment-constrained learning [0.0]
Naively learning the regression function and taking a sample mean of the target functional results in biased estimators.
We propose moment-constrained learning as a new RR learning approach that addresses some shortcomings in automatic debiasing.
arXiv Detail & Related papers (2024-09-29T20:56:54Z) - Federated Smoothing Proximal Gradient for Quantile Regression with Non-Convex Penalties [3.269165283595478]
Distributed sensors in the internet-of-things (IoT) generate vast amounts of sparse data.
We propose a federated smoothing proximal gradient (FSPG) algorithm that integrates a smoothing mechanism into the proximal gradient framework, thereby improving both precision and computational speed.
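As a rough illustration of the smoothed proximal-gradient idea, the sketch below runs a single-client proximal gradient loop on a smoothed (Huberized) pinball loss with a soft-threshold prox; the smoothing band, the l1 stand-in for the paper's non-convex penalty, and the omitted federated aggregation are all simplifying assumptions rather than the FSPG algorithm itself.

```python
# Hedged sketch: proximal gradient on a smoothed quantile (pinball) loss.
import numpy as np

def smoothed_check_deriv(r, tau=0.5, h=0.1):
    """Derivative of a Huber-smoothed check loss w.r.t. the residual r."""
    return np.where(r > h, tau, np.where(r < -h, tau - 1.0, tau - 0.5 + r / (2 * h)))

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_step(beta, X, y, lr=0.1, lam=0.05, tau=0.5, h=0.1):
    r = y - X @ beta
    grad = -X.T @ smoothed_check_deriv(r, tau, h) / len(y)   # chain rule: dr/dbeta = -X
    return soft_threshold(beta - lr * grad, lr * lam)        # prox of the l1 stand-in penalty

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [1.5, -2.0, 1.0]
y = X @ beta_true + rng.standard_t(df=3, size=200)           # heavy-tailed noise
beta = np.zeros(10)
for _ in range(300):
    beta = proximal_gradient_step(beta, X, y)
print(np.round(beta, 2))
```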
arXiv Detail & Related papers (2024-08-10T21:50:19Z) - Guaranteed Trajectory Tracking under Learned Dynamics with Contraction Metrics and Disturbance Estimation [5.147919654191323]
This paper presents an approach to trajectory-centric learning control based on contraction metrics and disturbance estimation.
The proposed framework is validated on a planar quadrotor example.
arXiv Detail & Related papers (2021-12-15T15:57:33Z) - Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network whose partial layers are iteratively exploited to refine its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
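A toy sketch of the weight-sharing refinement loop with a learned exit gate described in this summary; the refinement module, gating head, and exit threshold are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: iterative refinement with shared weights and a gated early exit.
import numpy as np

rng = np.random.default_rng(2)
W_refine = rng.normal(scale=0.1, size=(16, 16))   # shared refinement weights
w_gate = rng.normal(scale=0.1, size=16)           # gating head producing an exit score

def refine_step(h):
    """One pass of the weight-shared refinement module (residual update)."""
    return h + np.tanh(h @ W_refine)

def gate_exit_prob(h):
    return 1.0 / (1.0 + np.exp(-(h @ w_gate)))     # sigmoid exit probability

def iterative_refinement(h, max_iters=5, exit_threshold=0.5):
    """Refine an initial estimate until the gate fires or the budget is spent."""
    for step in range(max_iters):
        h = refine_step(h)
        if gate_exit_prob(h) > exit_threshold:      # per-sample early exit
            return h, step + 1
    return h, max_iters

h0 = rng.normal(size=16)                            # initial (coarse) estimate
refined, iters_used = iterative_refinement(h0)
print(f"exited after {iters_used} refinement iterations")
```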
arXiv Detail & Related papers (2021-11-11T23:31:34Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via
Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
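A hedged sketch of what a "slow start, fast decay" learning rate schedule could look like; the warmup fraction, peak value, and exponential decay rate are illustrative guesses rather than the schedule used in the paper.

```python
# Hedged sketch: slow linear warmup followed by a fast exponential decay.
import math

def slow_start_fast_decay(step, total_steps, peak_lr=0.01,
                          warmup_frac=0.3, min_lr=1e-5):
    """Ramp up slowly to peak_lr, then decay quickly toward min_lr."""
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps          # slow linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max(min_lr, peak_lr * math.exp(-5.0 * progress)) # fast decay tail

schedule = [slow_start_fast_decay(s, total_steps=100) for s in range(100)]
print(f"lr at step 0: {schedule[0]:.5f}, peak: {max(schedule):.5f}, "
      f"final: {schedule[-1]:.5f}")
```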
arXiv Detail & Related papers (2020-12-25T20:50:15Z) - Implicit Under-Parameterization Inhibits Data-Efficient Deep
Reinforcement Learning [97.28695683236981]
More gradient updates decrease the expressivity of the current value network.
We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings.
arXiv Detail & Related papers (2020-10-27T17:55:16Z) - Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem)
AdaRem adjusts the parameter-wise learning rate according to whether the direction of a parameter's past changes is aligned with the direction of the current gradient.
Our method outperforms previous adaptive learning rate-based algorithms in terms of the training speed and the test error.
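A toy sketch of the alignment idea described in this summary (one reading of it, not the official AdaRem update): track an exponential moving average of each parameter's past updates and scale its learning rate by whether that history agrees with the current gradient direction.

```python
# Hedged sketch: per-parameter learning rates scaled by direction alignment.
import numpy as np

def adarem_like_step(w, grad, avg_dir, base_lr=0.01, beta=0.9, eta=0.5):
    # alignment in {-1, 0, +1}: +1 if past updates agree with the descent direction
    alignment = np.sign(avg_dir) * np.sign(-grad)
    lr = base_lr * (1.0 + eta * alignment)         # parameter-wise learning rate
    update = -lr * grad
    avg_dir = beta * avg_dir + (1.0 - beta) * update
    return w + update, avg_dir

# usage on a toy quadratic objective 0.5 * ||w||^2
rng = np.random.default_rng(3)
w = rng.normal(size=4)
avg_dir = np.zeros_like(w)
for _ in range(100):
    grad = w                                       # gradient of 0.5 * ||w||^2
    w, avg_dir = adarem_like_step(w, grad, avg_dir)
print(np.round(w, 4))
```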
arXiv Detail & Related papers (2020-10-21T14:49:00Z) - Cross Learning in Deep Q-Networks [82.20059754270302]
We propose a novel cross Q-learning algorithm aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods.
Our algorithm builds on double Q-learning by maintaining a set of parallel models and estimating the Q-value based on a randomly selected network.
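A small tabular sketch of the mechanism summarized above: several Q estimators are kept in parallel, and each update bootstraps from a randomly selected peer in double-Q style. The toy MDP, table sizes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: parallel tabular Q estimators with randomly cross-selected targets.
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions, n_models = 5, 2, 4
Q = rng.normal(scale=0.01, size=(n_models, n_states, n_actions))  # parallel estimators

def cross_q_update(s, a, r, s_next, gamma=0.99, alpha=0.1):
    """Update one randomly chosen estimator using a target from another one."""
    i, j = rng.choice(n_models, size=2, replace=False)
    best_a = np.argmax(Q[i, s_next])            # action chosen by model i
    target = r + gamma * Q[j, s_next, best_a]   # value read from model j
    Q[i, s, a] += alpha * (target - Q[i, s, a])

# usage with random transitions from a toy MDP
for _ in range(1000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s_next, r = rng.integers(n_states), rng.normal()
    cross_q_update(s, a, r, s_next)
print(np.round(Q.mean(axis=0), 2))
```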
arXiv Detail & Related papers (2020-09-29T04:58:17Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)