From Computation to Consumption: Exploring the Compute-Energy Link for Training and Testing Neural Networks for SED Systems
- URL: http://arxiv.org/abs/2409.05080v1
- Date: Sun, 8 Sep 2024 12:51:34 GMT
- Title: From Computation to Consumption: Exploring the Compute-Energy Link for Training and Testing Neural Networks for SED Systems
- Authors: Constance Douwes, Romain Serizel
- Abstract summary: We study several neural network architectures that are key components of sound event detection systems.
We measure the energy consumption for training and testing small to large architectures.
We establish complex relationships between the energy consumption, the number of floating-point operations, the number of parameters, and the GPU/memory utilization.
- Score: 9.658615045493734
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The massive use of machine learning models, particularly neural networks, has raised serious concerns about their environmental impact. Indeed, over the last few years we have seen an explosion in the computing costs associated with training and deploying these systems. It is, therefore, crucial to understand their energy requirements in order to better integrate them into the evaluation of models, which has so far focused mainly on performance. In this paper, we study several neural network architectures that are key components of sound event detection systems, using an audio tagging task as an example. We measure the energy consumption for training and testing small to large architectures and establish complex relationships between the energy consumption, the number of floating-point operations, the number of parameters, and the GPU/memory utilization.
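The kind of measurement the abstract describes can be approximated with standard tooling. Below is a minimal sketch of one way to estimate GPU energy for a training or test pass by polling NVML power readings. It assumes the pynvml package and a single GPU; it is not the authors' actual measurement protocol, whose instrumentation details are not given here.

```python
# Minimal sketch: estimate GPU energy for a workload by polling NVML power.
# Assumes the pynvml package (pip install nvidia-ml-py) and a single GPU;
# the paper's actual measurement setup may differ.
import time
import threading
import pynvml

def measure_gpu_energy(workload, device_index=0, interval_s=0.1):
    """Run workload() while sampling GPU power draw; return energy in joules."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    samples = []
    done = threading.Event()

    def sampler():
        while not done.is_set():
            # nvmlDeviceGetPowerUsage reports milliwatts.
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
            time.sleep(interval_s)

    t = threading.Thread(target=sampler)
    t.start()
    start = time.time()
    workload()                      # e.g. one training epoch or a test pass
    elapsed = time.time() - start
    done.set()
    t.join()
    pynvml.nvmlShutdown()
    mean_power_w = sum(samples) / max(len(samples), 1)
    return mean_power_w * elapsed   # joules = watts * seconds
```

Multiplying mean power by wall-clock time is a coarse approximation; integrating the individual samples over their timestamps reduces error for bursty workloads.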
Related papers
- Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations [0.49478969093606673]
The BUTTER-E dataset is an augmentation of the BUTTER Empirical Deep Learning dataset.
This dataset reveals the complex relationship between dataset size, network structure, and energy use.
We propose a straightforward and effective energy model that accounts for network size, computing, and memory hierarchy (a toy fit is sketched after this entry).
arXiv Detail & Related papers (2024-03-13T00:27:19Z)
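As a toy illustration of the kind of energy model this entry describes, the sketch below fits a linear model of energy against parameter count, FLOPs, and memory traffic with ordinary least squares. The feature set, measurements, and linear form are assumptions for illustration, not the BUTTER-E model itself.

```python
# Toy sketch: fit a linear energy model E ~ b0 + b1*params + b2*flops + b3*mem.
# Features, data, and model form are illustrative, not the paper's model.
import numpy as np

# Hypothetical measurements: (parameters, FLOPs, memory bytes) -> energy (J).
X = np.array([
    [1e6, 2e9,    4e6],
    [5e6, 9e9,    2e7],
    [2e7, 4e10,   8e7],
    [8e7, 1.5e11, 3e8],
])
energy_j = np.array([120.0, 510.0, 2300.0, 9100.0])

# Least-squares fit with an intercept term.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, energy_j, rcond=None)

print("coefficients:", coef)
print("predictions (J):", A @ coef)
```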
- The Power of Training: How Different Neural Network Setups Influence the Energy Demand [5.526611783155303]
This work offers an evaluation of the effects of variations in machine learning training regimes and learning paradigms on the energy consumption of computing, in particular HPC hardware, from a life-cycle-aware perspective.
arXiv Detail & Related papers (2024-01-03T17:44:17Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Energy Efficiency of Training Neural Network Architectures: An Empirical Study [11.325530936177493]
The evaluation of Deep Learning models has traditionally focused on criteria such as accuracy, F1 score, and related measures.
The computations needed to train such models entail a large carbon footprint.
We study the relations between DL model architectures and their environmental impact in terms of energy consumed and CO2 emissions produced during training (a minimal energy-to-CO2 conversion is sketched after this entry).
arXiv Detail & Related papers (2023-02-02T09:20:54Z)
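For the energy-to-emissions relation above, a minimal conversion multiplies measured energy by a grid carbon-intensity factor. The intensity value below is an assumed placeholder; real factors vary by region and over time.

```python
# Toy conversion from measured training energy to CO2-equivalent emissions.
# The carbon intensity (gCO2eq per kWh) is an assumed placeholder value.
def co2_grams(energy_joules, grid_intensity_g_per_kwh=300.0):
    kwh = energy_joules / 3.6e6          # 1 kWh = 3.6 MJ
    return kwh * grid_intensity_g_per_kwh

print(f"{co2_grams(5.4e8):.0f} gCO2eq")  # e.g. 540 MJ of training energy
```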
- Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model [6.809944967863927]
Recently, there has been a trend of shifting the execution of deep learning inference tasks toward the edge of the network, closer to the user, to reduce latency and preserve data privacy.
In this work, we profile the energy consumption of inference tasks on some modern edge nodes.
We then distill a simple, practical model that estimates the energy consumption of a given inference task on the considered boards (a toy estimate of this kind is sketched after this entry).
arXiv Detail & Related papers (2022-10-04T14:12:59Z)
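The simplest estimate of this kind subtracts the board's idle power from the average power during the task and multiplies by latency. The sketch below uses hypothetical numbers; the paper's fitted model is more detailed.

```python
# Toy estimate of per-inference energy on an edge board:
# idle-corrected average power times elapsed time, divided by batch count.
# All numbers are hypothetical, not measurements from the paper.
def inference_energy_j(avg_power_w, idle_power_w, elapsed_s, n_inferences):
    dynamic_power_w = max(avg_power_w - idle_power_w, 0.0)
    return dynamic_power_w * elapsed_s / n_inferences

# Example: 11.5 W average, 3.2 W idle, 500 inferences in 12.4 s.
print(f"{inference_energy_j(11.5, 3.2, 12.4, 500) * 1000:.1f} mJ per inference")
```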
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency parts are processed with expensive operations while the lower-frequency parts are assigned cheap operations to relieve the computational burden (a toy DCT split is sketched after this entry).
Experiments on benchmark single-image super-resolution (SISR) models and datasets show that the frequency-aware dynamic network can be applied to various SISR architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
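To make the DCT-domain split concrete, the sketch below separates an image into complementary low- and high-frequency components with a fixed coefficient mask. The cutoff is arbitrary and the mask is hand-crafted; the paper's partitioning scheme is more elaborate.

```python
# Toy DCT-based split of an image into low- and high-frequency parts.
# The cutoff is arbitrary; the paper's partitioning is more elaborate.
import numpy as np
from scipy.fft import dctn, idctn

def split_by_frequency(img, cutoff=8):
    """Return (low, high) spatial-domain components of a 2-D array."""
    coeffs = dctn(img, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:cutoff, :cutoff] = 1.0          # keep top-left (low-frequency) block
    low = idctn(coeffs * mask, norm="ortho")
    high = idctn(coeffs * (1.0 - mask), norm="ortho")
    return low, high

img = np.random.rand(64, 64)
low, high = split_by_frequency(img)
print(np.allclose(low + high, img))       # the split is exactly complementary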
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Energy Drain of the Object Detection Processing Pipeline for Mobile
Devices: Analysis and Implications [77.00418462388525]
This paper presents the first detailed experimental study of a mobile augmented reality (AR) client's energy consumption and detection latency when executing Convolutional Neural Network (CNN)-based object detection.
Our detailed measurements refine the energy analysis of mobile AR clients and reveal several interesting perspectives regarding the energy consumption of executing CNN-based object detection.
arXiv Detail & Related papers (2020-11-26T00:32:07Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level (a toy spiking-neuron sketch follows this entry).
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
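The event-driven property the survey refers to can be illustrated with a minimal leaky integrate-and-fire neuron: spikes are emitted only when an internal membrane potential crosses a threshold, so downstream computation is triggered sparsely. Parameters below are arbitrary toy values.

```python
# Minimal leaky integrate-and-fire neuron: output events occur only at spikes,
# which is the event-driven property such hardware exploits.
# Threshold and leak are arbitrary toy values.
def lif_spikes(input_current, threshold=1.0, leak=0.95):
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t            # leaky integration of input current
        if v >= threshold:            # fire and reset on threshold crossing
            spikes.append(t)
            v = 0.0
    return spikes

print(lif_spikes([0.3, 0.4, 0.5, 0.0, 0.9, 0.9]))  # -> [2, 5] (spike times)
```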
- Resource-Efficient Neural Networks for Embedded Systems [23.532396005466627]
We provide an overview of the current state of the art of machine learning techniques.
We focus on resource-efficient inference based on deep neural networks (DNNs), the predominant machine learning models of the past decade.
We substantiate our discussion with experiments on well-known benchmark data sets using compression techniques (a toy pruning sketch follows this entry).
arXiv Detail & Related papers (2020-01-07T14:17:09Z)
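Magnitude pruning is one of the compression techniques such surveys cover. The sketch below zeroes out the smallest-magnitude weights of a matrix to reach a target sparsity; the sparsity level is arbitrary and this is a generic illustration, not the survey's specific experimental setup.

```python
# Toy global magnitude pruning: zero out the smallest-magnitude weights
# to reach a target sparsity. Sparsity level is an arbitrary example.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return a copy of weights with the smallest |w| set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(256, 256)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {np.mean(w_pruned == 0):.2%}")
```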