Maintaining Performance with Less Data
- URL: http://arxiv.org/abs/2208.02007v1
- Date: Wed, 3 Aug 2022 12:22:18 GMT
- Title: Maintaining Performance with Less Data
- Authors: Dominic Sanderson, Tatiana Kalgonova
- Abstract summary: We propose a novel method for training a neural network for image classification to reduce input data dynamically.
We show that accuracy may be maintained while reducing runtime by up to 50%, and reducing carbon emission proportionally.
- Score: 12.54745966896411
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel method for training a neural network for image
classification to reduce input data dynamically, in order to reduce the costs
of training a neural network model. As Deep Learning tasks become more popular,
their computational complexity increases, leading to more intricate algorithms
and models that have longer runtimes and require more input data. The result
is a greater cost in time, hardware, and environmental resources. By using data
reduction techniques, we reduce the amount of work performed, and therefore the
environmental impact of AI techniques, and with dynamic data reduction we show
that accuracy may be maintained while reducing runtime by up to 50%, and
reducing carbon emission proportionally.
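The abstract does not spell out the reduction criterion, but the general shape of dynamic data reduction during training can be illustrated with a minimal PyTorch sketch. Everything specific below is an assumption made for illustration only: the per-sample-loss "hardness" score, the keep_fraction and min_pool parameters, and the synthetic stand-in dataset are placeholders, not the authors' actual procedure.

```python
# Minimal sketch of dynamic data reduction during training (illustrative only).
# Assumptions: samples the model already fits well (low loss) are dropped each
# epoch, with a floor on the pool size; synthetic data stands in for real images.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, Subset

torch.manual_seed(0)

# Synthetic stand-in for an image-classification set (N flattened 32x32x3 images, 10 classes).
N, num_classes = 2048, 10
X = torch.randn(N, 3 * 32 * 32)
y = torch.randint(0, num_classes, (N,))
dataset = TensorDataset(X, y)

model = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, num_classes))
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss(reduction="none")  # per-sample losses

active_idx = torch.arange(N)   # indices still in the training pool
keep_fraction = 0.8            # assumed: keep the hardest 80% each epoch
min_pool = N // 4              # assumed floor so the pool never collapses

for epoch in range(10):
    loader = DataLoader(Subset(dataset, active_idx.tolist()), batch_size=128, shuffle=True)
    for xb, yb in loader:
        opt.zero_grad()
        loss = criterion(model(xb), yb).mean()
        loss.backward()
        opt.step()

    # Dynamic reduction step: score the current pool and keep only the
    # hardest samples (highest per-sample loss) for the next epoch.
    with torch.no_grad():
        losses = criterion(model(X[active_idx]), y[active_idx])
    if len(active_idx) > min_pool:
        k = max(min_pool, int(keep_fraction * len(active_idx)))
        hardest = torch.topk(losses, k).indices
        active_idx = active_idx[hardest]
    print(f"epoch {epoch}: training pool size = {len(active_idx)}")
```

The point the sketch tries to capture is that the reduction happens inside the training loop: the pool shrinks as the model learns, so later epochs touch less data and finish sooner.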
Related papers
- An In-Depth Analysis of Data Reduction Methods for Sustainable Deep Learning [0.15833270109954137]
We present up to eight different methods to reduce the size of a training dataset.
We also develop a Python package to apply them.
We experimentally compare how these data reduction methods affect the representativeness of the reduced dataset.
arXiv Detail & Related papers (2024-03-22T12:06:40Z)
- Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models [17.34908967455907]
"Machine unlearning" proposes the selective removal of unwanted data without the need for retraining from scratch.
Fast-NTK is a novel NTK-based unlearning algorithm that significantly reduces the computational complexity.
arXiv Detail & Related papers (2023-12-22T18:55:45Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Actively Learning Costly Reward Functions for Reinforcement Learning [56.34005280792013]
We show that it is possible to train agents in complex real-world environments orders of magnitude faster.
By enabling the application of reinforcement learning methods to new domains, we show that we can find interesting and non-trivial solutions.
arXiv Detail & Related papers (2022-11-23T19:17:20Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey [69.3939291118954]
State-of-the-art deep learning models have a parameter count that reaches into the billions. Training, storing and transferring such models is energy and time consuming, thus costly.
Model compression lowers storage and transfer costs, and can further make training more efficient by decreasing the number of computations in the forward and/or backward pass.
This work surveys methods that reduce the number of trained weights in deep learning models throughout training.
arXiv Detail & Related papers (2022-05-17T05:37:08Z)
- Transformer Networks for Data Augmentation of Human Physical Activity Recognition [61.303828551910634]
State-of-the-art models such as Recurrent Generative Adversarial Networks (RGAN) are used to generate realistic synthetic data.
In this paper, transformer-based generative adversarial networks, which have global attention on data, are compared with RGAN on the PAMAP2 and Real World Human Activity Recognition data sets.
arXiv Detail & Related papers (2021-09-02T16:47:29Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
- Data optimization for large batch distributed training of deep neural networks [0.19336815376402716]
Current practice for distributed training of deep neural networks faces the challenges of communication bottlenecks when operating at scale.
We propose a data optimization approach that utilizes machine learning to implicitly smooth out the loss landscape, resulting in fewer local minima.
Our approach filters out data points that are less important to feature learning, enabling us to speed up the training of models with larger batch sizes while improving accuracy.
arXiv Detail & Related papers (2020-12-16T21:22:02Z)
- Dynamic Hard Pruning of Neural Networks at the Edge of the Internet [11.605253906375424]
The Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training.
DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training.
Freed memory is reused by a dynamic batch sizing approach to counterbalance the accuracy degradation caused by the hard pruning strategy (see the pruning sketch after this list).
arXiv Detail & Related papers (2020-11-17T10:23:28Z)
- Efficient Training of Deep Convolutional Neural Networks by Augmentation in Embedding Space [24.847651341371684]
In applications where data are scarce, transfer learning and data augmentation techniques are commonly used to improve the generalization of deep learning models.
Fine-tuning a transfer model with data augmentation in the raw input space has a high computational cost, since the full network must be run for every augmented input.
We propose a method that replaces the augmentation in the raw input space with an approximate one that acts purely in the embedding space (see the embedding-space sketch after this list).
arXiv Detail & Related papers (2020-02-12T03:26:33Z)
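As referenced in the Dynamic Hard Pruning entry above, the combination of hard pruning during training with a growing batch size can be sketched as follows. This is not the DynHP implementation: the magnitude-based mask, the linear pruning schedule, and the batch-size heuristic are all assumptions made for illustration.

```python
# Illustrative sketch: hard pruning during training plus dynamic batch sizing.
# Assumptions: a manually managed magnitude mask, a linear per-epoch pruning
# schedule, and a batch size grown in proportion to the pruned share.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)
X, y = torch.randn(1024, 64), torch.randint(0, 10, (1024,))
data = TensorDataset(X, y)

layer = nn.Linear(64, 10)
opt = torch.optim.SGD(layer.parameters(), lr=0.05)
mask = torch.ones_like(layer.weight)   # 1 = kept, 0 = hard-pruned
base_batch = 64

for epoch in range(5):
    prune_frac = 0.1 * epoch           # assumed schedule: prune more each epoch
    if prune_frac > 0:
        k = int(prune_frac * layer.weight.numel())
        # Hard-prune the k smallest-magnitude weights by zeroing them in the mask.
        flat = layer.weight.detach().abs().flatten()
        idx = torch.topk(flat, k, largest=False).indices
        mask.view(-1)[idx] = 0.0

    # "Freed memory" heuristic: enlarge the batch in proportion to the pruned share.
    batch_size = int(round(base_batch / (1.0 - prune_frac)))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)

    for xb, yb in loader:
        opt.zero_grad()
        loss = nn.functional.cross_entropy(layer(xb), yb)
        loss.backward()
        layer.weight.grad *= mask      # pruned weights receive no updates
        opt.step()
        with torch.no_grad():
            layer.weight *= mask       # enforce hard pruning after the update
    kept = int(mask.sum().item())
    print(f"epoch {epoch}: kept {kept}/{mask.numel()} weights, batch_size={batch_size}")
```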
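As referenced in the embedding-space augmentation entry above, the core idea of augmenting in the embedding space rather than the raw input space can be sketched as follows. This is not the paper's method: the frozen linear "backbone", the one-time precomputation, and the Gaussian-noise augmentation are stand-ins chosen only to show why the expensive forward pass is paid once.

```python
# Illustrative sketch: augmentation applied in embedding space instead of raw input space.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen "backbone" standing in for a pretrained feature extractor.
backbone = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad_(False)

# Precompute embeddings once: the expensive forward pass is paid a single time.
images = torch.randn(512, 3 * 32 * 32)   # synthetic stand-in for raw inputs
labels = torch.randint(0, 10, (512,))
with torch.no_grad():
    emb = backbone(images)

head = nn.Linear(128, 10)                # only the small head is trained
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

for epoch in range(20):
    # Augment in embedding space: cheap, and avoids re-running the backbone
    # for every augmented view of every image.
    noisy = emb + 0.1 * torch.randn_like(emb)
    loss = nn.functional.cross_entropy(head(noisy), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```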