Which scaling rule applies to Artificial Neural Networks
- URL: http://arxiv.org/abs/2005.08942v8
- Date: Tue, 30 Nov 2021 21:08:20 GMT
- Title: Which scaling rule applies to Artificial Neural Networks
- Authors: János Végh
- Abstract summary: We show that cooperating and communicating computing systems, comprising segregated single processors, have severe performance limitations.
The paper starts from von Neumann's original model, considering the transfer time in addition to the processing time rather than neglecting it, and derives an appropriate interpretation and handling of Amdahl's Law.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Experience shows that cooperating and communicating computing systems,
comprising segregated single processors, have severe performance limitations.
In his classic "First Draft" von Neumann warned that using a "too fast
processor" vitiates his simple "procedure" (but not his computing model!);
furthermore, that using the classic computing paradigm to imitate neuronal
operations is unsound. Amdahl added that large machines, comprising many
processors, have an inherent disadvantage. Given that an ANN's components
communicate heavily with each other, that they are built from a large number of
components designed and fabricated for use in conventional computing, and that
they attempt to mimic biological operation using ill-suited technological
solutions, their achievable payload computing performance is conceptually
modest. The type of workload that AI-based systems generate leads to
exceptionally low payload computational performance, and their design/technology
limits their size to just above "toy"-level systems: the scaling of
processor-based ANN systems is strongly nonlinear. Given the proliferation and
growing size of ANN systems, we suggest ideas for estimating, in advance, the
efficiency of a device or application. By analyzing published measurements, we
provide evidence that data transfer time drastically influences both the
performance and the feasibility of ANNs. We discuss how major theoretical
limiting factors, the layer structure of ANNs, and the technical implementation
of their communication affect their efficiency.
The paper starts from von Neumann's original model, considering the transfer
time in addition to the processing time rather than neglecting it, and derives
an appropriate interpretation and handling of Amdahl's Law. It shows that,
under this interpretation, Amdahl's Law correctly describes ANNs.
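As a rough illustration of the argument, the sketch below extends Amdahl's Law with an explicit data-transfer term. It is a minimal model with assumed parameters, not the paper's exact formulation; in particular, the assumption that communication overhead grows linearly with the processor count is ours.

```python
# Minimal illustrative model of Amdahl's Law with an explicit
# data-transfer term (an assumption for illustration, not the
# paper's exact formulation).

def payload_speedup(n_proc, parallel_fraction, transfer_fraction):
    """Speedup of n_proc processors when part of the work is
    sequential and part of the time is spent transferring data."""
    sequential = 1.0 - parallel_fraction
    # Assumed: transfer overhead grows with the processor count,
    # since more communicating partners mean more non-payload time.
    transfer = transfer_fraction * n_proc
    return 1.0 / (sequential + transfer + parallel_fraction / n_proc)

def payload_efficiency(n_proc, parallel_fraction, transfer_fraction):
    return payload_speedup(n_proc, parallel_fraction, transfer_fraction) / n_proc

for n in (10, 100, 1000, 10000):
    print(n, round(payload_efficiency(n, 0.999, 1e-6), 4))
```

With these assumed numbers the efficiency falls from about 0.99 at 10 processors to under 0.01 at 10,000, and the speedup itself peaks and then declines: the qualitatively "strongly nonlinear" scaling described above.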
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment
Task-oriented edge computing addresses the computational demands of real-time visual inference by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - SLaNC: Static LayerNorm Calibration
Quantization to lower precision formats naturally poses a number of challenges caused by the limited range of the available value representations.
In this article, we propose a computationally efficient scaling technique that can be easily applied to Transformer models during inference.
Our method suggests a straightforward way of scaling the LayerNorm inputs based on the static weights of the immediately preceding linear layers.
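This works because LayerNorm is invariant to a positive rescaling of its input: LN(x/s) == LN(x) for any s > 0, so a constant derived offline from the preceding weights can keep low-precision activations in range without changing the output. A minimal sketch of that idea follows; using the spectral norm of the preceding weight matrix as the scale is an assumption here, not necessarily SLaNC's exact choice.

```python
import torch
import torch.nn as nn

def static_scale(linear: nn.Linear) -> float:
    # Assumed choice of static bound: the spectral norm of the
    # preceding weight matrix, computed once, offline.
    return torch.linalg.matrix_norm(linear.weight, ord=2).item()

linear = nn.Linear(512, 512)
ln = nn.LayerNorm(512)
s = static_scale(linear)

x = torch.randn(4, 512)
y = linear(x)
# Pre-scaling the LayerNorm input tames its magnitude in low
# precision; LayerNorm's scale invariance keeps the result unchanged.
print(torch.allclose(ln(y / s), ln(y), atol=1e-5))  # True
```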
arXiv Detail & Related papers (2024-10-14T14:32:55Z) - Weight Block Sparsity: Training, Compilation, and AI Engine Accelerators
Deep Neural Networks (DNNs) are being developed, trained, and utilized, putting a strain on both advanced and limited devices.
Our solution is to implement weight block sparsity, which is a structured sparsity that is friendly to hardware.
We will present performance estimates using accurate and complete code generation for AIE2 configuration sets (AMD Versal FPGAs) with Resnet50, Inception V3, and VGG16.
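As a concrete illustration of the technique, the sketch below prunes a weight matrix in fixed 4x4 blocks, keeping the blocks with the largest L1 norm; the block size, scoring rule, and keep ratio are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def block_sparsify(w: torch.Tensor, block: int = 4, keep: float = 0.5):
    """Zero out entire block x block tiles of w, keeping the `keep`
    fraction of tiles with the largest L1 norm. The coarse, regular
    nonzero pattern is what makes this sparsity hardware-friendly."""
    rows, cols = w.shape
    assert rows % block == 0 and cols % block == 0
    tiles = w.reshape(rows // block, block, cols // block, block)
    scores = tiles.abs().sum(dim=(1, 3))            # L1 norm per tile
    k = int(keep * scores.numel())
    thresh = scores.flatten().topk(k).values.min()  # k-th largest score
    mask = (scores >= thresh).float()[:, None, :, None]
    return (tiles * mask).reshape(rows, cols)

w = torch.randn(16, 16)
w_sparse = block_sparsify(w)
print((w_sparse == 0).float().mean())  # ~0.5, zeroed in whole 4x4 tiles
```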
arXiv Detail & Related papers (2024-07-12T17:37:49Z) - Slimmable Encoders for Flexible Split DNNs in Bandwidth and Resource
Constrained IoT Systems
We propose a novel split computing approach based on slimmable ensemble encoders.
The key advantage of our design is the ability to adapt computational load and transmitted data size in real-time with minimal overhead and time.
Our model outperforms existing solutions in terms of compression efficacy and execution time, especially in the context of weak mobile devices.
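A hedged sketch of the general slimmable idea, not the paper's ensemble design: a single convolution whose active output channels can be cut at run time, so the compute cost and the size of the transmitted feature tensor shrink together on a weak device or a congested link.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv(nn.Module):
    """Convolution whose effective width is chosen per call."""
    def __init__(self, in_ch: int = 3, max_out: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, max_out, kernel_size=3, padding=1)

    def forward(self, x, width_mult: float = 1.0):
        # Use only the first fraction of the output filters.
        out_ch = max(1, int(self.conv.out_channels * width_mult))
        return F.conv2d(x, self.conv.weight[:out_ch],
                        self.conv.bias[:out_ch], padding=1)

enc = SlimmableConv()
x = torch.randn(1, 3, 32, 32)
print(enc(x, 1.0).shape)   # torch.Size([1, 64, 32, 32]): full width
print(enc(x, 0.25).shape)  # torch.Size([1, 16, 32, 32]): 4x less to send
```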
arXiv Detail & Related papers (2023-06-22T06:33:12Z) - Solving Large-scale Spatial Problems with Convolutional Neural Networks
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
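The property exploited here is that a purely convolutional network has no fixed-size head, so the same weights accept inputs of any length. A minimal sketch with assumed, illustrative layer sizes:

```python
import torch
import torch.nn as nn

# All-convolutional 1-D model: no flatten or fixed-size linear layer,
# so the input length is unconstrained.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),  # per-position prediction head
)

small = torch.randn(8, 1, 128)      # short training windows
large = torch.randn(1, 1, 100_000)  # full-scale signal at inference
print(model(small).shape)  # torch.Size([8, 1, 128])
print(model(large).shape)  # torch.Size([1, 1, 100000]): same weights
```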
arXiv Detail & Related papers (2023-06-14T01:24:42Z) - Deep learning applied to computational mechanics: A comprehensive
review, state of the art, and the classics
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Asynchronous Parallel Incremental Block-Coordinate Descent for
Decentralized Machine Learning
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing.
For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm since it is often impractical or inefficient to share/aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
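To unpack the term in the title: block-coordinate descent updates one block of parameters at a time while holding the rest fixed. The toy below is a minimal serial version on a least-squares problem; the paper's contribution is an asynchronous, parallel, decentralized variant, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 40))   # design matrix
y = rng.normal(size=200)         # targets
x = np.zeros(40)                 # parameters, split into blocks of 10
block = 10

for step in range(40):                        # 10 sweeps over 4 blocks
    i = (step * block) % x.size               # pick the next block cyclically
    sl = slice(i, i + block)
    r = y - A @ x + A[:, sl] @ x[sl]          # residual with this block removed
    x[sl] = np.linalg.lstsq(A[:, sl], r, rcond=None)[0]  # exact block solve

print(0.5 * np.linalg.norm(A @ x - y) ** 2)   # objective after the sweeps
```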
arXiv Detail & Related papers (2022-02-07T15:04:15Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and the compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - A Tensor Compiler for Unified Machine Learning Prediction Serving
Machine Learning (ML) adoption in the enterprise requires simpler and more efficient software infrastructure.
Model scoring is a primary contributor to infrastructure complexity and cost as models are trained once but used many times.
We propose HUMMINGBIRD, a novel approach to model scoring that compiles featurization operators and traditional ML models into a small set of tensor operations.
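The core trick is turning control-flow-heavy models such as decision trees into dense tensor algebra that a single runtime can execute. Below is a hand-built toy version of a GEMM-style encoding for a two-node tree; it conveys the spirit of the approach, and the matrices are constructed by hand rather than by the library's compiler.

```python
import numpy as np

# Toy tree: node0 tests x0 < 0.5; its left child node1 tests x1 < 0.3.
# Leaves: L0 (left,left), L1 (left,right), L2 (right).
A = np.array([[1.0, 0.0],     # feature -> node: node0 reads x0
              [0.0, 1.0]])    #                  node1 reads x1
B = np.array([0.5, 0.3])      # per-node thresholds
C = np.array([[ 1,  1, -1],   # node0: L0, L1 in its left subtree, L2 right
              [ 1, -1,  0]])  # node1: L0 left, L1 right, L2 unrelated
D = np.array([2, 1, 0])       # per leaf: required number of "go left" hits
E = np.array([[10.0],         # leaf output values
              [20.0],
              [30.0]])

X = np.array([[0.2, 0.1],     # falls into L0
              [0.2, 0.9],     # falls into L1
              [0.9, 0.5]])    # falls into L2

T = (X @ A < B).astype(float)        # which nodes vote "go left"
leaf = (T @ C == D).astype(float)    # one-hot leaf membership, no branches
print(leaf @ E)                      # [[10.], [20.], [30.]]
```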
arXiv Detail & Related papers (2020-10-09T21:02:47Z) - How deep the machine learning can be
Machine learning is mostly based on conventional computing (processors).
This paper attempts to review some of the caveats, especially concerning scaling the computing performance of AI solutions.
arXiv Detail & Related papers (2020-05-02T16:06:31Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in applying these methods, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)