Thermodynamic Cost of Edge Detection in Artificial Neural
Network (ANN)-Based Processors
- URL: http://arxiv.org/abs/2003.08196v2
- Date: Thu, 29 Oct 2020 10:15:49 GMT
- Title: Thermodynamic Cost of Edge Detection in Artificial Neural
Network (ANN)-Based Processors
- Authors: Seçkin Barışık and İlke Ercan
- Abstract summary: We study architectural-level contributions to energy dissipation in an Artificial Neural Network (ANN)-based processor trained to perform an edge-detection task.
The results reveal the inherent efficiency advantages of an ANN trained for specific tasks over general-purpose processors based on the von Neumann architecture.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Architecture-based heat dissipation analyses allow us to reveal fundamental
sources of inefficiency in a given processor and thereby provide
road-maps for designing less dissipative computing schemes, independent of the
technology base used to implement them. In this work, we study
architectural-level contributions to energy dissipation in an Artificial Neural
Network (ANN)-based processor that is trained to perform an edge-detection task.
We compare the training and information-processing cost of the ANN to that of
conventional architectures and algorithms using a 64-pixel binary image. Our
results reveal the inherent efficiency advantages of an ANN trained for
specific tasks over general-purpose processors based on the von Neumann
architecture. We also compare the proposed performance improvements to those of
Cellular Array Processors (CAPs) and illustrate the reduction in dissipation
for special-purpose processors. Lastly, we calculate the change in dissipation
as a function of input data structure and show the effect of randomness on the
energetic cost of information processing. The results we obtained provide a
basis of comparison for task-based fundamental energy-efficiency analyses across
a range of processors and thereby contribute to the study of
architecture-level descriptions of processors and thermodynamic cost
calculations based on the physics of computation.
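As a back-of-the-envelope illustration (not taken from the paper), the Landauer bound sets the minimum dissipation for erasing one bit at E_min = k_B T ln 2 ≈ 2.87 × 10⁻²¹ J at 300 K. The sketch below detects edges in an 8×8 binary image (matching the paper's 64-pixel input size) and applies that bound to the erased intermediate bits; the nearest-neighbor edge rule and the assumption that all 64 input bits are erased after processing are illustrative choices, not the paper's model:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed operating temperature, K
LANDAUER_J_PER_BIT = K_B * T * math.log(2)  # minimum energy to erase one bit

def edge_map(img):
    """Mark a pixel as an edge if it is 1 and any 4-neighbor is 0.
    `img` is an 8x8 list of lists of 0/1 values (a 64-pixel binary image)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if img[i][j] == 1:
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    # treat out-of-bounds neighbors as background (0)
                    if not (0 <= ni < n and 0 <= nj < n) or img[ni][nj] == 0:
                        out[i][j] = 1
                        break
    return out

# toy input: a filled 4x4 square centered in an 8x8 frame
img = [[1 if 2 <= i <= 5 and 2 <= j <= 5 else 0 for j in range(8)]
       for i in range(8)]
edges = edge_map(img)

# crude lower bound: assume the 64 intermediate input bits are erased
bits_erased = 64
print(f"edge pixels: {sum(sum(row) for row in edges)}")
print(f"Landauer lower bound: {bits_erased * LANDAUER_J_PER_BIT:.2e} J")
```

Real processors dissipate many orders of magnitude more than this bound per bit operation, which is why the paper's architecture-level accounting, rather than the Landauer limit alone, is needed to compare practical schemes.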
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these constraints by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Neuromorphic on-chip reservoir computing with spiking neural network architectures [0.562479170374811]
Reservoir computing is a promising approach for harnessing the computational power of recurrent neural networks.
This paper investigates the application of integrate-and-fire neurons within reservoir computing frameworks for two distinct tasks.
We study the reservoir computing performance using a custom integrate-and-fire code, Intel's Lava neuromorphic computing software framework, and via an on-chip implementation in Loihi.
arXiv Detail & Related papers (2024-07-30T05:05:09Z)
- Mechanistic Design and Scaling of Hybrid Architectures [114.3129802943915]
We identify and test new hybrid architectures constructed from a variety of computational primitives.
We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis.
We find MAD synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures.
arXiv Detail & Related papers (2024-03-26T16:33:12Z)
- Free-Space Optical Spiking Neural Network [0.0]
We introduce the Free-space Optical deep Spiking Convolutional Neural Network (OSCNN).
This novel approach draws inspiration from computational models of the human eye.
Our results demonstrate promising performance with minimal latency and power consumption compared to their electronic ONN counterparts.
arXiv Detail & Related papers (2023-11-08T09:41:14Z)
- Neural network scoring for efficient computing [0.9124662097191377]
We introduce a composite score that aims to characterize the trade-off between accuracy and power consumption measured during the inference of neural networks.
To the best of our knowledge, it is the first fit test for neural architectures on hardware architectures.
arXiv Detail & Related papers (2023-10-14T10:29:52Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
New research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
- Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit [38.898230519968116]
We propose an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit.
It can efficiently support different neural networks and achieve a high model complexity with millions of neurons.
Our prototype system built with off-the-shelf optoelectronic components surpasses the performance of state-of-the-art graphics processing units.
arXiv Detail & Related papers (2020-08-26T16:34:58Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Near-Optimal Hardware Design for Convolutional Neural Networks [0.0]
This study proposes a novel, special-purpose, and high-efficiency hardware architecture for convolutional neural networks.
The proposed architecture maximizes the utilization of multipliers by designing the computational circuit with the same structure as that of the computational flow of the model.
An implementation based on the proposed hardware architecture has been applied in commercial AI products.
arXiv Detail & Related papers (2020-02-06T09:15:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.