Memory Efficient Adaptive Attention For Multiple Domain Learning
- URL: http://arxiv.org/abs/2110.10969v1
- Date: Thu, 21 Oct 2021 08:33:29 GMT
- Title: Memory Efficient Adaptive Attention For Multiple Domain Learning
- Authors: Himanshu Pradeep Aswani, Abhiraj Sunil Kanse, Shubhang Bhatnagar, Amit
Sethi
- Abstract summary: Training CNNs from scratch on new domains typically demands large numbers of labeled images and computations.
One way to reduce these requirements is to modularize the CNN architecture and freeze the weights of the heavier modules.
Recent studies have proposed alternative modular architectures and schemes that lead to a reduction in the number of trainable parameters needed.
- Score: 3.8907870897999355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training CNNs from scratch on new domains typically demands large numbers of
labeled images and computations, which is not suitable for low-power hardware.
One way to reduce these requirements is to modularize the CNN architecture and
freeze the weights of the heavier modules, that is, the lower layers after
pre-training. Recent studies have proposed alternative modular architectures
and schemes that lead to a reduction in the number of trainable parameters
needed to match the accuracy of fully fine-tuned CNNs on new domains. Our work
suggests that a further reduction in the number of trainable parameters by an
order of magnitude is possible. Furthermore, we propose that new modularization
techniques for multiple domain learning should also be compared on other
realistic metrics, such as the number of interconnections needed between the
fixed and trainable modules, the number of training samples needed, the order
of computations required and the robustness to partial mislabeling of the
training data. On all of these criteria, the proposed architecture matches or
demonstrates advantages over the current state of the art.
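
A minimal PyTorch sketch of the freeze-and-adapt idea the abstract describes: the pre-trained backbone is frozen and only a small attention-style module plus a new classifier head are trained on the new domain. The SE-style channel-attention module, layer sizes, and names below are illustrative assumptions, not the paper's exact adaptive-attention architecture.

```python
# Hypothetical sketch: freeze a pre-trained backbone, train only a small
# channel-attention adapter and head on the new domain (assumed module design).
import torch
import torch.nn as nn

class ChannelAttentionAdapter(nn.Module):
    """Small SE-style module: reweights frozen feature channels per domain."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

# Frozen backbone (stands in for the pre-trained lower layers).
backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
)
for p in backbone.parameters():
    p.requires_grad = False

adapter = ChannelAttentionAdapter(128)
head = nn.Linear(128, 10)            # new-domain classifier

# Only the adapter and head are trainable: far fewer parameters than
# fine-tuning the whole backbone.
params = list(adapter.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(8, 3, 64, 64)
y = torch.randint(0, 10, (8,))
feats = adapter(backbone(x)).mean(dim=(2, 3))   # global average pooling
loss = nn.functional.cross_entropy(head(feats), y)
opt.zero_grad(); loss.backward(); opt.step()
```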
Related papers
- Transferable Post-training via Inverse Value Learning [83.75002867411263]
We propose modeling changes at the logits level during post-training using a separate neural network (i.e., the value network).
After training this network on a small base model using demonstrations, this network can be seamlessly integrated with other pre-trained models during inference.
We demonstrate that the resulting value network has broad transferability across pre-trained models of different parameter sizes.
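
A minimal sketch of the logits-level idea described above: a small value network learns a logit correction that is added to a frozen base model's logits, and the same value network can later be paired with another frozen model that shares the vocabulary. The additive combination rule and all names are assumptions for illustration.

```python
# Hypothetical sketch: train a small "value network" to predict logit-level
# changes that are added to a frozen pre-trained model's logits.
import torch
import torch.nn as nn

vocab, hidden = 1000, 64
base_model = nn.Sequential(nn.Embedding(vocab, hidden), nn.Linear(hidden, vocab))
value_net = nn.Sequential(nn.Embedding(vocab, hidden), nn.Linear(hidden, vocab))

for p in base_model.parameters():       # base model stays frozen
    p.requires_grad = False

tokens = torch.randint(0, vocab, (4, 16))
targets = torch.randint(0, vocab, (4, 16))

# Post-training objective: only the value network learns the change in logits.
logits = base_model(tokens) + value_net(tokens)
loss = nn.functional.cross_entropy(logits.view(-1, vocab), targets.view(-1))

opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
opt.zero_grad(); loss.backward(); opt.step()

# At inference, the same value network could be combined with a different
# frozen model that shares the vocabulary, since it only operates on logits.
```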
arXiv Detail & Related papers (2024-10-28T13:48:43Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions reachable through the training procedure, including gradient descent and regularizers, which limits this flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- SortedNet: A Scalable and Generalized Framework for Training Modular Deep Neural Networks [30.069353400127046]
We propose SortedNet to harness the inherent modularity of deep neural networks (DNNs).
SortedNet enables the training of sub-models simultaneously along with the training of the main model.
It is able to train 160 sub-models at once, achieving at least 96% of the original model's performance.
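
A minimal sketch of training nested sub-models together with the main model: at every step a random width is sampled and only the first `k` hidden units are used, so smaller sub-models share the leading parameters of the full network. The MLP, the width schedule, and all names are assumptions, not SortedNet's actual setup.

```python
# Hypothetical sketch of sorted/nested sub-model training: sample a width each
# step and use only the first `width` hidden units of a shared model.
import random
import torch
import torch.nn as nn

class SortedMLP(nn.Module):
    def __init__(self, d_in=20, d_hidden=256, d_out=5):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x, width):
        h = torch.relu(self.fc1(x)[:, :width])           # first `width` units
        return h @ self.fc2.weight[:, :width].t() + self.fc2.bias

model = SortedMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
widths = [32, 64, 128, 256]                               # nested sub-models

x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
for step in range(200):
    w = random.choice(widths)                             # sample a sub-model
    loss = nn.functional.cross_entropy(model(x, w), y)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, model(x, 64) is a usable small sub-model and model(x, 256)
# is the full model; many such sub-models are trained simultaneously.
```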
arXiv Detail & Related papers (2023-09-01T05:12:25Z)
- Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One [60.5818387068983]
Graph neural networks (GNNs) suffer from severe inefficiency.
We propose to decouple a multi-layer GNN as multiple simple modules for more efficient training.
We show that the proposed framework is highly efficient with reasonable performance.
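
A minimal sketch of the decoupling idea: each one-layer graph module is trained on its own with an auxiliary classifier instead of backpropagating through the full deep stack, and its frozen output becomes the next module's fixed input. The dense adjacency, greedy schedule, and layer sizes are simplifying assumptions, not the paper's exact framework.

```python
# Hypothetical sketch of decoupled GNN training: train simple one-layer graph
# modules separately (greedy, layer-wise) with a dense normalized adjacency.
import torch
import torch.nn as nn

n, d, classes = 100, 32, 4
A = (torch.rand(n, n) < 0.05).float()
A_hat = A + torch.eye(n)                          # add self-loops
A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)    # row-normalize

X = torch.randn(n, d)
y = torch.randint(0, classes, (n,))

H = X
for layer in range(3):                            # three decoupled modules
    gnn = nn.Linear(H.shape[1], 32)               # simple graph-conv weight
    aux = nn.Linear(32, classes)                  # per-module classifier
    opt = torch.optim.Adam(list(gnn.parameters()) + list(aux.parameters()), lr=0.01)

    for step in range(100):                       # train this module only
        Z = torch.relu(A_hat @ gnn(H))
        loss = nn.functional.cross_entropy(aux(Z), y)
        opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():                         # frozen output feeds the next module
        H = torch.relu(A_hat @ gnn(H))
```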
arXiv Detail & Related papers (2023-04-20T07:21:32Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
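
A heavily simplified stand-in for the idea of learned sparse connectivity between modules: attention-style scores decide which inputs each small module reads, and a top-k mask keeps the connectivity sparse. This is not the NAC architecture; the module pool, scoring, and sparsity mechanism are all assumptions.

```python
# Hypothetical sketch: attention scores select a sparse set of inputs per
# module (top-k), so connectivity is learned rather than hand-designed.
import torch
import torch.nn as nn

n_modules, d, k = 8, 32, 2
modules = nn.ModuleList(nn.Sequential(nn.Linear(d, d), nn.ReLU()) for _ in range(n_modules))
queries = nn.Parameter(torch.randn(n_modules, d))    # one learned query per module

def forward(tokens):                                  # tokens: (n_tokens, d)
    scores = queries @ tokens.t()                     # (n_modules, n_tokens)
    topk = scores.topk(k, dim=1)
    weights = torch.softmax(topk.values, dim=1)       # sparse attention weights
    outputs = []
    for m in range(n_modules):
        inp = (weights[m].unsqueeze(1) * tokens[topk.indices[m]]).sum(0)
        outputs.append(modules[m](inp))               # each module reads only k tokens
    return torch.stack(outputs)                       # (n_modules, d)

tokens = torch.randn(16, d)
print(forward(tokens).shape)                          # torch.Size([8, 32])
```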
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Continual Learning with Transformers for Image Classification [12.028617058465333]
In computer vision, neural network models struggle to continually learn new concepts without forgetting what has been learnt in the past.
We develop a solution called Adaptive Distillation of Adapters (ADA) to perform continual learning.
We empirically demonstrate on different classification tasks that this method maintains a good predictive performance without retraining the model.
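
A minimal sketch of the adapter idea behind this approach: a frozen transformer backbone gets a small residual bottleneck adapter, and only the adapter and task head are trained on a new task. ADA's adapter-distillation step is omitted, and the layer sizes and names are assumptions for illustration.

```python
# Hypothetical sketch: frozen transformer layer + small bottleneck adapter;
# only the adapter and task head are trained on the new task.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: d -> r -> d, initialized near identity."""
    def __init__(self, d=128, r=16):
        super().__init__()
        self.down, self.up = nn.Linear(d, r), nn.Linear(r, d)
        nn.init.zeros_(self.up.weight); nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

backbone = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False                   # past knowledge stays frozen

adapter, head = Adapter(), nn.Linear(128, 10)
opt = torch.optim.Adam(list(adapter.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(8, 32, 128)                   # (batch, tokens, dim) patch embeddings
y = torch.randint(0, 10, (8,))
logits = head(adapter(backbone(x)).mean(dim=1))
loss = nn.functional.cross_entropy(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```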
arXiv Detail & Related papers (2022-06-28T15:30:10Z)
- Compositional Models: Multi-Task Learning and Knowledge Transfer with Modular Networks [13.308477955656592]
We propose a new approach for learning modular networks based on the isometric version of ResNet.
In our method, the modules can be invoked repeatedly and allow knowledge transfer to novel tasks.
We show that our method leads to interpretable self-organization of modules in case of multi-task learning, transfer learning and domain adaptation.
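
A minimal sketch of the compositional idea: a shared pool of residual modules is reused across tasks and steps, and each task learns only soft selection weights plus its own head. The soft (softmax) selection, module count, and names are simplifying assumptions, not the paper's isometric-ResNet scheme.

```python
# Hypothetical sketch: shared modules, invoked repeatedly; each task learns a
# soft selection over modules at every composition step, plus its own head.
import torch
import torch.nn as nn

d, n_modules, n_steps, n_tasks = 64, 6, 4, 3
modules = nn.ModuleList(
    nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d)) for _ in range(n_modules)
)
task_logits = nn.ParameterList(
    nn.Parameter(torch.zeros(n_steps, n_modules)) for _ in range(n_tasks)
)
task_heads = nn.ModuleList(nn.Linear(d, 5) for _ in range(n_tasks))

def forward(x, task):
    weights = torch.softmax(task_logits[task], dim=1)      # (n_steps, n_modules)
    for step in range(n_steps):
        mix = sum(weights[step, m] * modules[m](x) for m in range(n_modules))
        x = x + mix                                         # residual composition
    return task_heads[task](x)

x, y = torch.randn(16, d), torch.randint(0, 5, (16,))
params = list(modules.parameters()) + list(task_logits.parameters()) + list(task_heads.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss = nn.functional.cross_entropy(forward(x, task=0), y)
opt.zero_grad(); loss.backward(); opt.step()
```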
arXiv Detail & Related papers (2021-07-23T00:05:55Z)
- Differentiable Architecture Pruning for Transfer Learning [6.935731409563879]
We propose a gradient-based approach for extracting sub-architectures from a given large model.
Our architecture-pruning scheme produces transferable new structures that can be successfully retrained to solve different tasks.
We provide theoretical convergence guarantees and validate the proposed transfer-learning strategy on real data.
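
A minimal sketch of gradient-based sub-architecture extraction: a learnable gate scales each block, an L1-style penalty pushes gates toward zero, and blocks with near-zero gates are pruned before retraining on the target task. The gating form, threshold, and penalty weight are assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch: learnable gates on residual blocks plus a sparsity
# penalty; surviving blocks form the transferable sub-architecture.
import torch
import torch.nn as nn

d, n_blocks = 32, 6
blocks = nn.ModuleList(nn.Sequential(nn.Linear(d, d), nn.ReLU()) for _ in range(n_blocks))
gate_logits = nn.Parameter(torch.zeros(n_blocks))
head = nn.Linear(d, 3)

def forward(x):
    gates = torch.sigmoid(gate_logits)
    for blk, g in zip(blocks, gates):
        x = x + g * blk(x)                   # gated residual block
    return head(x)

x, y = torch.randn(64, d), torch.randint(0, 3, (64,))
params = list(blocks.parameters()) + [gate_logits] + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
lam = 0.05                                   # sparsity strength (assumed value)
for step in range(300):
    loss = nn.functional.cross_entropy(forward(x), y)
    loss = loss + lam * torch.sigmoid(gate_logits).sum()
    opt.zero_grad(); loss.backward(); opt.step()

keep = torch.sigmoid(gate_logits) > 0.1      # surviving sub-architecture
print("kept blocks:", keep.nonzero().flatten().tolist())
```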
arXiv Detail & Related papers (2021-07-07T17:44:59Z)
- GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
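
A heavily simplified sketch of the stated heuristic: per-layer scale factors on the initial weights are tuned so that one simulated SGD step yields the lowest loss. The tiny functional MLP, fixed step size, and absence of the method's norm constraints are assumptions made for brevity.

```python
# Hypothetical sketch: learn one scale per layer so that a single simulated
# SGD step from the scaled initialization gives the smallest possible loss.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))

w1 = torch.randn(16, 32) * 0.1               # raw initial weights (kept fixed)
w2 = torch.randn(32, 4) * 0.1
log_s = torch.zeros(2, requires_grad=True)   # learnable per-layer log-scales
eta = 0.1                                    # prescribed SGD step size

def loss_fn(a, b):
    return F.cross_entropy(torch.relu(x @ a) @ b, y)

opt = torch.optim.Adam([log_s], lr=0.05)
for step in range(100):
    s = log_s.exp()
    w1s, w2s = w1 * s[0], w2 * s[1]          # scaled initialization
    g1, g2 = torch.autograd.grad(loss_fn(w1s, w2s), (w1s, w2s), create_graph=True)
    post_step_loss = loss_fn(w1s - eta * g1, w2s - eta * g2)   # loss after one SGD step
    opt.zero_grad(); post_step_loss.backward(); opt.step()

print("learned layer scales:", log_s.exp().detach().tolist())
```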
arXiv Detail & Related papers (2021-02-16T11:45:35Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
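
A minimal sketch of gradient/update quantization with a bit-width that changes over communication rounds: each client update is stochastically rounded onto a uniform grid before transmission. The coarse-to-fine schedule shown is a crude stand-in for AdaFL's adaptive strategy, not the actual algorithm.

```python
# Hypothetical sketch: stochastic uniform quantization of a model update,
# with a bit-width that grows over rounds (stand-in for an adaptive schedule).
import torch

def quantize(update: torch.Tensor, bits: int) -> torch.Tensor:
    """Stochastically round `update` onto a (2**bits - 1)-level uniform grid."""
    levels = 2 ** bits - 1
    lo, hi = update.min(), update.max()
    scaled = (update - lo) / (hi - lo + 1e-12) * levels
    rounded = torch.floor(scaled + torch.rand_like(scaled))   # unbiased rounding
    return rounded / levels * (hi - lo) + lo

update = torch.randn(10_000)                   # a client's model update
for round_idx in range(1, 6):
    bits = min(2 + round_idx, 8)               # start coarse, refine in later rounds
    q = quantize(update, bits)
    err = (q - update).norm() / update.norm()
    print(f"round {round_idx}: {bits} bits, relative error {err:.4f}")
```

Fewer bits per round cut communication cost, while raising the bit-width later lowers the quantization error floor.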
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- Regularized Adaptation for Stable and Efficient Continuous-Level Learning on Image Processing Networks [7.730087303035803]
We propose a novel continuous-level learning framework using a Filter Transition Network (FTN).
FTN is a non-linear module that easily adapts to new levels and is regularized to prevent undesirable side effects.
Extensive results on various image processing tasks indicate that the performance of FTN is stable in terms of adaptation and interpolation.
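
A minimal sketch of the filter-transition idea: a small module predicts a change to the base network's convolution kernels, and a coefficient `alpha` interpolates continuously between the original level and the adapted level. The linear filter transformer, additive formulation, and training target are simplifying assumptions, not the FTN design itself.

```python
# Hypothetical sketch: a tiny module maps base conv filters to adapted filters;
# `alpha` interpolates continuously between the two processing levels.
import torch
import torch.nn as nn
import torch.nn.functional as F

base_conv = nn.Conv2d(16, 16, 3, padding=1)
for p in base_conv.parameters():
    p.requires_grad = False                        # base filters stay fixed

k = base_conv.weight.numel() // base_conv.weight.shape[0]          # 16*3*3 per filter
ftn = nn.Sequential(nn.Linear(k, k), nn.ReLU(), nn.Linear(k, k))   # tiny filter transformer

def transitioned_conv(x, alpha: float):
    w0 = base_conv.weight                          # (16, 16, 3, 3)
    delta = ftn(w0.flatten(1)).view_as(w0)         # predicted filter change
    w = w0 + alpha * delta                         # alpha in [0, 1] sets the level
    return F.conv2d(x, w, base_conv.bias, padding=1)

x = torch.randn(2, 16, 32, 32)
target = torch.randn(2, 16, 32, 32)                # e.g. a stronger restoration level
opt = torch.optim.Adam(ftn.parameters(), lr=1e-3)
loss = F.mse_loss(transitioned_conv(x, alpha=1.0), target)
opt.zero_grad(); loss.backward(); opt.step()

# At test time, intermediate alphas (e.g. 0.5) give intermediate effect levels.
```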
arXiv Detail & Related papers (2020-03-11T07:46:57Z)