Model-Based Robust Deep Learning: Generalizing to Natural,
Out-of-Distribution Data
- URL: http://arxiv.org/abs/2005.10247v2
- Date: Mon, 2 Nov 2020 13:20:37 GMT
- Title: Model-Based Robust Deep Learning: Generalizing to Natural,
Out-of-Distribution Data
- Authors: Alexander Robey, Hamed Hassani, George J. Pappas
- Abstract summary: We propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning.
Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data.
- Score: 104.69689574851724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning has resulted in major breakthroughs in many application
domains, the frameworks commonly used in deep learning remain fragile to
artificially-crafted and imperceptible changes in the data. In response to this
fragility, adversarial training has emerged as a principled approach for
enhancing the robustness of deep learning with respect to norm-bounded
perturbations. However, there are other sources of fragility for deep learning
that are arguably more common and less thoroughly studied. Indeed, natural
variation such as lighting or weather conditions can significantly degrade the
accuracy of trained neural networks, demonstrating that such natural variation
presents a significant challenge for deep learning.
In this paper, we propose a paradigm shift from perturbation-based
adversarial robustness toward model-based robust deep learning. Our objective
is to provide general training algorithms that can be used to train deep neural
networks to be robust against natural variation in data. Critical to our
paradigm is first obtaining a model of natural variation which can be used to
vary data over a range of natural conditions. Such models may be either known a
priori or else learned from data. In the latter case, we show that deep
generative models can be used to learn models of natural variation that are
consistent with realistic conditions. We then exploit such models in three
novel model-based robust training algorithms in order to enhance the robustness
of deep learning with respect to the given model. Our extensive experiments
show that across a variety of naturally-occurring conditions and across various
datasets, deep neural networks trained with our model-based algorithms
significantly outperform both standard deep learning algorithms as well as
norm-bounded robust deep learning algorithms.
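The abstract describes minimizing worst-case loss under a model of natural variation rather than under norm-bounded pixel perturbations. The following is an illustrative sketch only: the paper proposes three model-based robust training algorithms, and this toy version captures just the shared min-max idea with a model G(x, delta). The brightness-shift G, the linear classifier, and the synthetic data are assumptions for illustration, not the paper's actual models or datasets.

```python
import numpy as np

def G(x, delta):
    """Toy known model of natural variation: a brightness-like shift."""
    return x + delta

def logistic_loss(w, x, y):
    """Loss of a linear classifier w on example (x, y) with y in {-1, +1}."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def worst_case_grad(w, x, y, deltas):
    """Inner maximization over a grid of nuisance parameters, then the
    loss gradient at the maximizing delta (a Danskin-style step)."""
    d_star = max(deltas, key=lambda d: logistic_loss(w, G(x, d), y))
    xv = G(x, d_star)
    return (-y / (1.0 + np.exp(y * np.dot(w, xv)))) * xv

def train_model_based_robust(X, y, deltas, lr=0.1, epochs=200):
    """Outer minimization: gradient descent on the worst-case loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        g = np.mean([worst_case_grad(w, xi, yi, deltas)
                     for xi, yi in zip(X, y)], axis=0)
        w -= lr * g
    return w

# Toy data: two classes that remain separable across the nuisance range.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 0.3, (20, 2)), rng.normal(-2.0, 0.3, (20, 2))])
y = np.array([+1] * 20 + [-1] * 20)
deltas = np.linspace(-0.5, 0.5, 5)  # range of natural variation
w = train_model_based_robust(X, y, deltas)
accuracy = float(np.mean(np.sign(X @ w) == y))
```

The key contrast with norm-bounded adversarial training is that the inner maximization ranges over semantically meaningful nuisance parameters of G rather than over raw input perturbations.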
Related papers
- Learning to Continually Learn with the Bayesian Principle [36.75558255534538]
In this work, we adopt the meta-learning paradigm to combine the strong representational power of neural networks and simple statistical models' robustness to forgetting.
Since the neural networks remain fixed during continual learning, they are protected from catastrophic forgetting.
arXiv Detail & Related papers (2024-05-29T04:53:31Z)
- Optimizing Dense Feed-Forward Neural Networks [0.0]
We propose a novel feed-forward neural network constructing method based on pruning and transfer learning.
Our approach can compress the number of parameters by more than 70%.
We also evaluate the degree of transfer learning by comparing the refined model against a network of the same architecture trained from scratch.
arXiv Detail & Related papers (2023-12-16T23:23:16Z)
- Neuro-symbolic model for cantilever beams damage detection [0.0]
We propose a neuro-symbolic model for the detection of damages in cantilever beams based on a novel cognitive architecture.
The hybrid discriminative model is introduced under the name Logic Convolutional Neural Regressor.
arXiv Detail & Related papers (2023-05-04T13:12:39Z)
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks [62.48782506095565]
We show that due to the greedy nature of learning in deep neural networks, models tend to rely on just one modality while under-fitting the other modalities.
We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning.
arXiv Detail & Related papers (2022-02-10T20:11:21Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Factorized Deep Generative Models for Trajectory Generation with Spatiotemporal-Validity Constraints [10.960924101404498]
Deep generative models for trajectory data can learn expressive explanatory models for sophisticated latent patterns.
We first propose novel deep generative models factorizing time-variant and time-invariant latent variables.
We then develop new inference strategies based on variational inference and constrained optimization to enforce spatiotemporal validity.
arXiv Detail & Related papers (2020-09-20T02:06:36Z)
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z)
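The entry above describes a perturbation set defined by a conditional generator over a constrained latent region, with adversarial training performed in that latent space. A hedged toy sketch of this mechanism follows; the fixed linear "generator", the hinge-loss classifier, and the random-search inner step are illustrative assumptions, not the paper's trained conditional generator or optimization method.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.2, (2, 2))  # stand-in for a trained decoder

def g(x, z):
    """Conditional generator: perturbation set is {g(x, z) : ||z||_2 <= eps}."""
    return x + A @ z

def hinge_loss(w, x, y):
    return max(0.0, 1.0 - y * float(w @ x))

def worst_latent(w, x, y, eps, n_samples=32):
    """Approximate the inner maximization by random search on the eps-sphere."""
    zs = rng.normal(size=(n_samples, 2))
    zs = eps * zs / np.linalg.norm(zs, axis=1, keepdims=True)
    return max(zs, key=lambda z: hinge_loss(w, g(x, z), y))

def adversarial_train(X, y, eps, lr=0.05, epochs=150):
    """Subgradient descent on the worst-case latent-perturbation loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grads = []
        for xi, yi in zip(X, y):
            xv = g(xi, worst_latent(w, xi, yi, eps))
            grads.append(-yi * xv if hinge_loss(w, xv, yi) > 0 else 0.0 * xv)
        w -= lr * np.mean(grads, axis=0)
    return w

# Toy data: two classes separable even after latent-space perturbation.
X = np.vstack([rng.normal(+2.0, 0.3, (15, 2)), rng.normal(-2.0, 0.3, (15, 2))])
y = np.array([+1] * 15 + [-1] * 15)
w = adversarial_train(X, y, eps=0.5)
accuracy = float(np.mean(np.sign(X @ w) == y))
```

Searching over the latent ball rather than raw inputs is what restricts the adversary to perturbations the generator deems realistic, which is the point of learning the perturbation set.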
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.