Efficient Online Processing with Deep Neural Networks
- URL: http://arxiv.org/abs/2306.13474v1
- Date: Fri, 23 Jun 2023 12:29:44 GMT
- Title: Efficient Online Processing with Deep Neural Networks
- Authors: Lukas Hedegaard
- Abstract summary: This dissertation is dedicated to improving neural network efficiency. Specifically, a core contribution addresses the efficiency aspects during online inference.
These advances are attained through a bottom-up computational reorganization and judicious architectural modifications.
- Score: 1.90365714903665
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The capabilities and adoption of deep neural networks (DNNs) grow at an
exhilarating pace: Vision models accurately classify human actions in videos
and identify cancerous tissue in medical scans as precisely as human experts;
large language models answer wide-ranging questions, generate code, and write
prose, becoming the topic of everyday dinner-table conversations. Even though
their uses are exhilarating, the continually increasing model sizes and
computational complexities have a dark side. The economic cost and negative
environmental externalities of training and serving models are in evident
disharmony with financial viability and climate action goals.
Instead of pursuing yet another increase in predictive performance, this
dissertation is dedicated to the improvement of neural network efficiency.
Specifically, a core contribution addresses the efficiency aspects during
online inference. Here, the concept of Continual Inference Networks (CINs) is
proposed and explored across four publications. CINs extend prior
state-of-the-art methods developed for offline processing of spatio-temporal
data and reuse their pre-trained weights, improving their online processing
efficiency by an order of magnitude. These advances are attained through a
bottom-up computational reorganization and judicious architectural
modifications. The benefit to online inference is demonstrated by reformulating
several widely used network architectures into CINs, including 3D CNNs,
ST-GCNs, and Transformer Encoders. An orthogonal contribution tackles the
concurrent adaptation and computational acceleration of a large source model
into multiple lightweight derived models. Drawing on fusible adapter networks
and structured pruning, Structured Pruning Adapters achieve superior predictive
accuracy under aggressive pruning using significantly fewer learned weights
compared to fine-tuning with pruning.
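To make the continual-inference idea concrete, the sketch below contrasts an offline sliding-window pipeline, which re-extracts per-frame features for every overlapping clip, with a continual formulation that computes each frame's features exactly once and caches them. This is a minimal illustration of my own construction, not code from the dissertation or its companion library; the function names and dimensions are assumptions.

```python
# Illustrative sketch (not the dissertation's implementation): offline
# sliding-window processing vs. continual, frame-by-frame processing.
import numpy as np

WINDOW = 8          # temporal receptive field (frames per prediction)
FEAT_DIM = 16

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive per-frame (spatial) sub-network."""
    return np.tanh(frame[:FEAT_DIM])

def clip_prediction(feats: np.ndarray) -> float:
    """Stand-in for the temporal head aggregating WINDOW per-frame features."""
    return float(feats.mean())

def offline_stream(frames):
    """Re-runs frame_features on all WINDOW frames for every prediction."""
    outputs = []
    for t in range(WINDOW - 1, len(frames)):
        clip = frames[t - WINDOW + 1 : t + 1]
        feats = np.stack([frame_features(f) for f in clip])  # redundant work
        outputs.append(clip_prediction(feats))
    return outputs

def continual_stream(frames):
    """Computes each frame's features once; only the cheap head is re-run."""
    cache, outputs = [], []
    for f in frames:
        cache.append(frame_features(f))      # one feature extraction per frame
        cache = cache[-WINDOW:]              # keep only the last WINDOW features
        if len(cache) == WINDOW:
            outputs.append(clip_prediction(np.stack(cache)))
    return outputs

frames = [np.random.default_rng(i).standard_normal(32) for i in range(20)]
assert np.allclose(offline_stream(frames), continual_stream(frames))
```

Both formulations produce identical outputs, but the continual one performs a single feature extraction per incoming frame instead of WINDOW extractions per prediction. In the dissertation's CINs, this kind of reorganization is applied inside the network layers themselves (continual variants of 3D convolutions, ST-GCN blocks, and Transformer Encoders), so pre-trained offline weights can be reused unchanged while redundant per-step computation is eliminated.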
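The Structured Pruning Adapter contribution can likewise be pictured with a small, hedged sketch. The low-rank parameterization and names below are illustrative assumptions rather than the papers' exact formulation; the gist is that a frozen source weight is specialized by a small fusible adapter while a structured (output-channel) pruning mask removes whole rows, so only the adapter factors and the mask are learned per task.

```python
# Hedged sketch of the Structured Pruning Adapter idea: frozen source weight
# plus a small fusible adapter, with a structured pruning mask over output
# channels. Parameterization here is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
out_dim, in_dim, rank = 8, 16, 2

W_source = rng.standard_normal((out_dim, in_dim))   # frozen, pre-trained weight
A = 0.01 * rng.standard_normal((out_dim, rank))     # learned adapter factor
B = 0.01 * rng.standard_normal((rank, in_dim))      # learned adapter factor
keep_rows = rng.random(out_dim) > 0.5               # structured channel mask

# Fuse adapter and mask into one dense weight for inference. Pruned output
# channels are zeroed here for simplicity; in practice they would be removed
# entirely to shrink the layer.
W_fused = np.where(keep_rows[:, None], W_source + A @ B, 0.0)

x = rng.standard_normal(in_dim)
y = W_fused @ x                                      # lightweight derived model
print(f"learned parameters: {A.size + B.size} vs. full fine-tune: {W_source.size}")
```

Because the adapter fuses back into a single dense, structurally pruned weight, the derived model pays no extra inference cost relative to a conventionally pruned network, while the number of task-specific learned weights stays far below that of fine-tuning with pruning.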
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this challenge by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - Power-Enhanced Residual Network for Function Approximation and Physics-Informed Inverse Problems [0.0]
This paper introduces a novel neural network structure called the Power-Enhancing residual network.
It improves the network's ability to approximate both smooth and non-smooth functions in 2D and 3D settings.
Results emphasize the exceptional accuracy of the proposed Power-Enhancing residual network, particularly for non-smooth functions.
arXiv Detail & Related papers (2023-10-24T10:01:15Z) - Mobile Traffic Prediction at the Edge through Distributed and Transfer Learning [2.687861184973893]
Research on this topic has concentrated on making predictions in a centralized fashion, by collecting data from the different network elements.
We propose a novel prediction framework based on edge computing which uses datasets obtained on the edge through a large measurement campaign.
arXiv Detail & Related papers (2023-10-22T23:48:13Z) - Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance [0.8749675983608172]
We derive new mathematical results to measure the changes in entropy as fully-connected and convolutional neural networks process data.
By measuring the change in entropy as networks process data, patterns critical to a well-performing network can be visualized and identified.
Experiments in image compression, image classification, and image segmentation on benchmark datasets demonstrate these losses guide neural networks to learn rich latent data representations in fewer dimensions.
arXiv Detail & Related papers (2023-08-28T23:33:07Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - A Faster Approach to Spiking Deep Convolutional Neural Networks [0.0]
Spiking neural networks (SNNs) have closer dynamics to the brain than current deep neural networks.
We propose a network structure based on previous work to improve network runtime and accuracy.
arXiv Detail & Related papers (2022-10-31T16:13:15Z) - Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.