State-driven Implicit Modeling for Sparsity and Robustness in Neural
Networks
- URL: http://arxiv.org/abs/2209.09389v1
- Date: Mon, 19 Sep 2022 23:58:48 GMT
- Title: State-driven Implicit Modeling for Sparsity and Robustness in Neural
Networks
- Authors: Alicia Y. Tsai, Juliette Decugis, Laurent El Ghaoui, Alper Atamtürk
- Abstract summary: We present a new approach to training implicit models, called State-driven Implicit Modeling (SIM).
SIM constrains the internal states and outputs to match those of a baseline model, circumventing costly backward computations.
We demonstrate how the SIM approach can be applied to significantly improve the sparsity and robustness of baseline models trained on the FashionMNIST and CIFAR-100 datasets.
- Score: 3.604879434384177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit models are a general class of learning models that forgo the
hierarchical layer structure typical in neural networks and instead define the
internal states based on an "equilibrium" equation, offering competitive
performance and reduced memory consumption. However, training such models
usually relies on expensive implicit differentiation for backward propagation.
In this work, we present a new approach to training implicit models, called
State-driven Implicit Modeling (SIM), where we constrain the internal states
and outputs to match those of a baseline model, circumventing costly backward
computations. The training problem becomes convex by construction and can be
solved in a parallel fashion, thanks to its decomposable structure. We
demonstrate how the SIM approach can be applied to significantly improve
sparsity (parameter reduction) and robustness of baseline models trained on
FashionMNIST and CIFAR-100 datasets.
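To make the state-matching idea concrete, the sketch below assumes the standard implicit-model form x = phi(Ax + Bu) with prediction y = Cx + Du, and fits sparse weights by matching the baseline's recorded states and pre-activations one row at a time. The row-wise L1 problems illustrate the decomposable convex structure; they are not the paper's exact program (which includes additional well-posedness constraints), and all names are hypothetical.

```python
# Minimal sketch of SIM-style state matching (illustrative, not the paper's code).
import numpy as np
from sklearn.linear_model import Lasso

def sim_fit(X, U, Z, alpha=1e-3):
    """Fit sparse weights so that A @ X + B @ U matches the baseline
    pre-activations Z, one row at a time. Each row is an independent
    convex problem, so the loop parallelizes trivially.

    X: (n_states, m) baseline internal states for m training inputs
    U: (n_inputs, m) the corresponding model inputs
    Z: (n_states, m) baseline pre-activations, with X = phi(Z)
    """
    n_states, _ = X.shape
    features = np.vstack([X, U]).T            # (m, n_states + n_inputs)
    W = np.zeros((n_states, features.shape[1]))
    for i in range(n_states):                 # decomposable: one Lasso per row
        reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        reg.fit(features, Z[i])
        W[i] = reg.coef_
    A, B = W[:, :n_states], W[:, n_states:]   # sparse by construction (L1)
    return A, B
```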
Related papers
- Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment [69.33930972652594]
We propose a novel structural pruning approach to jointly learn the weights and structurally prune architectures of CNN models.
The core element of our method is a Reinforcement Learning (RL) agent whose actions determine the pruning ratios of the CNN model's layers.
We conduct the joint training and pruning by iteratively training the model's weights and the agent's policy.
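As a hedged illustration of how agent actions could map onto a network, the sketch below applies per-layer magnitude pruning at ratios chosen by a policy; the RL agent itself and all names here are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: applying agent-chosen per-layer pruning ratios via
# magnitude pruning (the RL policy producing `ratios` is assumed, not shown).
import torch

def prune_by_ratios(model, ratios):
    """Zero out the smallest-magnitude weights of each named layer.
    `ratios` maps layer name -> fraction of weights to remove, e.g. as
    emitted by an RL agent's action for that layer."""
    for name, module in model.named_modules():
        if name in ratios and hasattr(module, "weight"):
            w = module.weight.data
            k = int(ratios[name] * w.numel())
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            mask = (w.abs() > threshold).float()
            module.weight.data.mul_(mask)       # prune in place
```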
arXiv Detail & Related papers (2024-03-28T15:22:29Z)
- Homotopy-based training of NeuralODEs for accurate dynamics discovery [0.0]
We develop a new training method for NeuralODEs, based on synchronization and homotopy optimization.
We show that synchronizing the model dynamics and the training data tames the originally irregular loss landscape.
Our method achieves competitive or better training loss while often requiring less than half the number of training epochs.
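The synchronization idea can be sketched as a coupling term k * (x_data - x) added to the model dynamics and annealed to zero over training (the homotopy parameter). The minimal Euler-integration example below is an illustration under that assumption, not the paper's code.

```python
# Illustrative sketch: rollout with a synchronization coupling term that
# pulls the model state toward the data, with k annealed toward zero.
import numpy as np

def synchronized_rollout(f_theta, x0, data_traj, dt, k):
    """Integrate dx/dt = f_theta(x) + k * (x_data(t) - x) with Euler steps.
    With k > 0 the model is pulled toward the data, smoothing the loss
    landscape; as k -> 0 the problem reduces to the original NeuralODE."""
    xs = [x0]
    for x_data in data_traj[:-1]:
        x = xs[-1]
        xs.append(x + dt * (f_theta(x) + k * (x_data - x)))
    return np.stack(xs)

# Homotopy schedule: start strongly coupled, finish with the original problem.
ks = np.linspace(1.0, 0.0, num=10)   # assumption: linear annealing
```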
arXiv Detail & Related papers (2022-10-04T06:32:45Z)
- Re-parameterizing Your Optimizers rather than Architectures [119.08740698936633]
We propose a novel paradigm of incorporating model-specific prior knowledge into the optimizers and using them to train generic (simple) models.
As an implementation, we propose a novel methodology to add prior knowledge by modifying the gradients according to a set of model-specific hyper-parameters.
We focus on a VGG-style plain model and showcase that such a simple model trained with a RepOptimizer, referred to as RepOpt-VGG, performs on par with the recent well-designed models.
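A minimal sketch of the gradient-modification step, assuming the prior enters as per-parameter constant scales; the `grad_scales` values are hypothetical stand-ins for the paper's model-specific hyper-parameters.

```python
# Hedged sketch of training-time gradient re-parameterization: each
# parameter's gradient is rescaled by a model-specific constant before
# the optimizer step, injecting the prior through the optimizer rather
# than through extra architectural branches.
import torch

def step_with_grad_mods(model, optimizer, grad_scales):
    for name, p in model.named_parameters():
        if p.grad is not None and name in grad_scales:
            p.grad.mul_(grad_scales[name])   # inject the prior via gradients
    optimizer.step()
    optimizer.zero_grad()
```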
arXiv Detail & Related papers (2022-05-30T16:55:59Z)
- Regularized Sequential Latent Variable Models with Adversarial Neural Networks [33.74611654607262]
We present different ways of using high-level latent random variables in RNNs to model the variability in sequential data.
We explore ways of using adversarial methods to train a variational RNN model.
arXiv Detail & Related papers (2021-08-10T08:05:14Z)
- Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
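One common way to implement such a penalty is a Hutchinson-style estimate of the squared Frobenius norm of the Jacobian at the fixed point, computed with vector-Jacobian products. The sketch below follows that recipe with assumed names and normalization; it is not claimed to match the paper's exact formulation.

```python
# Minimal sketch of Jacobian regularization for an equilibrium model:
# estimate ||df/dz||_F^2 at the fixed point with random probes and add
# the result to the training loss.
import torch

def jacobian_penalty(f, z_star, x, n_probes=1):
    z_star = z_star.detach().requires_grad_(True)
    out = f(z_star, x)                      # one evaluation of the layer
    penalty = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(out)           # E[||v^T J||^2] = ||J||_F^2
        (vjp,) = torch.autograd.grad(out, z_star, grad_outputs=v,
                                     create_graph=True, retain_graph=True)
        penalty = penalty + vjp.pow(2).sum() / v.numel()   # dim-normalized
    return penalty / n_probes
```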
arXiv Detail & Related papers (2021-06-28T00:14:11Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders [22.54887526392739]
We propose a novel approach to training models with deep-latent hierarchies based on Optimal Transport.
We show that our method enables the generative model to fully leverage its deep-latent hierarchy, avoiding the well known "latent variable collapse" issue of VAEs.
arXiv Detail & Related papers (2020-10-07T15:04:20Z)
- Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
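A minimal sketch of the feedback idea, assuming magnitude-based masks: the forward pass uses masked weights while gradients update the underlying dense copy, so pruned weights can later reactivate when the mask is recomputed. All names here are illustrative.

```python
# Hedged sketch of dynamic pruning with feedback (illustrative names).
import torch

def train_step(dense_w, mask, inputs, targets, loss_fn, forward, lr, sparsity):
    w = (dense_w * mask).requires_grad_(True)   # forward uses masked weights
    loss = loss_fn(forward(w, inputs), targets)
    loss.backward()
    with torch.no_grad():
        dense_w -= lr * w.grad                  # feedback: dense copy updated
        k = int(sparsity * dense_w.numel())     # refresh the magnitude mask
        if k > 0:
            thr = dense_w.abs().flatten().kthvalue(k).values
            mask.copy_((dense_w.abs() > thr).float())
    return loss.item()
```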
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
- Conditional Neural Architecture Search [5.466990830092397]
It is often the case that a well-trained ML model does not fit the constraints of deployment on edge platforms.
We propose a conditional neural architecture search method using a GAN, which produces feasible ML models for different platforms.
arXiv Detail & Related papers (2020-06-06T20:39:33Z)