Energy Estimates Across Layers of Computing: From Devices to Large-Scale
Applications in Machine Learning for Natural Language Processing, Scientific
Computing, and Cryptocurrency Mining
- URL: http://arxiv.org/abs/2310.07516v1
- Date: Wed, 11 Oct 2023 14:14:05 GMT
- Title: Energy Estimates Across Layers of Computing: From Devices to Large-Scale
Applications in Machine Learning for Natural Language Processing, Scientific
Computing, and Cryptocurrency Mining
- Authors: Sadasivan Shankar
- Abstract summary: Estimates of energy usage across layers of computing, from devices to algorithms, have been determined and analyzed.
Energy has been estimated for three large-scale computing applications: Artificial Intelligence (AI)/Machine Learning for Natural Language Processing, Scientific Simulations, and Cryptocurrency Mining.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Estimates of energy usage across layers of computing, from devices
to algorithms, have been determined and analyzed. Building on the previous
analysis [3], the energy needed by single devices and systems, including three
large-scale computing applications (Artificial Intelligence (AI)/Machine
Learning for Natural Language Processing, Scientific Simulations, and
Cryptocurrency Mining), has been estimated. In contrast to bit-level switching,
where transistors achieved energy efficiency through geometrical scaling,
higher energy is expended at both the instruction and simulation levels of an
application. Additionally, the analysis of AI/ML accelerators indicates that
architectural changes on an older semiconductor technology node can achieve
energy efficiency comparable to that of a different architecture on a newer
technology node. Further comparisons of the energy used in computing systems
with the thermodynamic and biological limits indicate that the total simulation
of an application requires 27-36 orders of magnitude more energy. These energy
estimates underscore the need for serious consideration of energy efficiency in
computing by including energy as a design parameter, enabling the growing needs
of compute-intensive applications in a digital world.
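To make the scale of such comparisons concrete, the following is a minimal back-of-the-envelope sketch comparing assumed per-operation energies at different layers of the computing stack against the Landauer limit (kT ln 2) at room temperature. The per-operation values are illustrative assumptions for the sketch, not figures from the paper.

```python
import math

# Boltzmann constant (J/K) and an assumed operating temperature (K)
K_B = 1.380649e-23
T = 300.0

# Landauer limit: minimum energy to erase one bit at temperature T
landauer_j_per_bit = K_B * T * math.log(2)  # ~2.9e-21 J

# Illustrative per-operation energies at different layers of the stack
# (assumed values for this sketch, not figures from the paper), in joules
energy_per_op = {
    "device switching": 1e-17,       # assumed single-transistor switching energy
    "instruction": 1e-11,            # assumed energy per executed instruction
    "application operation": 1e-9,   # assumed end-to-end energy per useful operation
}

for layer, energy in energy_per_op.items():
    orders = math.log10(energy / landauer_j_per_bit)
    print(f"{layer}: {energy:.1e} J, ~{orders:.0f} orders of magnitude above Landauer")
```

With these assumed values, even device-level switching sits several orders of magnitude above the thermodynamic floor, and the gap widens rapidly toward the application level.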
Related papers
- Present and Future of AI in Renewable Energy Domain: A Comprehensive Survey [0.0]
Artificial intelligence (AI) has become a crucial instrument for streamlining processes in various industries.
Nine AI-based strategies are identified here to assist Renewable Energy (RE) in contemporary power systems.
This study also addresses three main topics: using AI technology for renewable power generation, utilizing AI for renewable energy forecasting, and optimizing energy systems.
arXiv Detail & Related papers (2024-06-22T04:36:09Z)
- Quantum Computing Enhanced Service Ecosystem for Simulation in Manufacturing [56.61654656648898]
We propose a framework for a quantum computing-enhanced service ecosystem for simulation in manufacturing.
We analyse two high-value use cases, aiming at a quantitative evaluation of these new computing paradigms in industrially relevant settings.
arXiv Detail & Related papers (2024-01-19T11:04:14Z)
- Deep Photonic Reservoir Computer for Speech Recognition [49.1574468325115]
Speech recognition is a critical task in the field of artificial intelligence and has witnessed remarkable advancements.
Deep reservoir computing is energy efficient but exhibits limitations in performance when compared to more resource-intensive machine learning algorithms.
We propose a photonic-based deep reservoir computer and evaluate its effectiveness on different speech recognition tasks.
arXiv Detail & Related papers (2023-12-11T17:43:58Z)
- On the Opportunities of Green Computing: A Survey [80.21955522431168]
Artificial Intelligence (AI) has achieved significant advancements in technology and research over several decades of development.
The need for high computing power brings higher carbon emissions and undermines research fairness.
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
arXiv Detail & Related papers (2023-11-01T11:16:41Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Energy-frugal and Interpretable AI Hardware Design using Learning Automata [5.514795777097036]
A new machine learning algorithm, called the Tsetlin machine, has been proposed.
In this paper, we investigate methods of energy-frugal artificial intelligence hardware design.
We show that frugal resource allocation can provide decisive energy reduction while also achieving robust and interpretable learning.
arXiv Detail & Related papers (2023-05-19T15:11:18Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- Precise Energy Consumption Measurements of Heterogeneous Artificial Intelligence Workloads [0.534434568021034]
We present measurements of the energy consumption of two typical applications of deep learning models on different types of compute nodes.
One advantage of our approach is that the information on energy consumption is available to all users of the supercomputer.
arXiv Detail & Related papers (2022-12-03T21:40:55Z)
- Trends in Energy Estimates for Computing in AI/Machine Learning Accelerators, Supercomputers, and Compute-Intensive Applications [3.2634122554914]
We examine the computational energy requirements of different systems driven by the geometrical scaling law.
We show that energy efficiency due to geometrical scaling is slowing down.
At the application level, general-purpose AI-ML methods can be computationally energy intensive.
arXiv Detail & Related papers (2022-10-12T16:14:33Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Learnergy: Energy-based Machine Learners [0.0]
Machine learning techniques have been broadly encouraged in the context of deep learning architectures.
The Restricted Boltzmann Machine relies on an energy-based, probabilistic formulation to tackle diverse applications, such as classification, reconstruction, and generation of images and signals (a minimal sketch of its energy function follows this list).
arXiv Detail & Related papers (2020-03-16T21:14:32Z)
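The Restricted Boltzmann Machine mentioned in the last entry is defined by a bilinear energy function over binary visible and hidden units. Below is a minimal, illustrative NumPy sketch of that energy function; the dimensions, parameter values, and the rbm_energy helper are assumptions made for demonstration and are not the Learnergy library's API.

```python
import numpy as np

def rbm_energy(v, h, W, a, b):
    """Restricted Boltzmann Machine energy: E(v, h) = -a.v - b.h - v.W.h"""
    return -(a @ v) - (b @ h) - (v @ W @ h)

# Illustrative (assumed) dimensions and parameters
rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # visible-hidden weights
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

# One binary visible/hidden configuration and its energy
v = rng.integers(0, 2, size=n_visible).astype(float)
h = rng.integers(0, 2, size=n_hidden).astype(float)
print("E(v, h) =", rbm_energy(v, h, W, a, b))
```

Lower energies correspond to more probable configurations, which is what lets the model be trained and sampled probabilistically for tasks such as reconstruction and generation.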