IoTCO2: Assessing the End-To-End Carbon Footprint of Internet-of-Things-Enabled Deep Learning
- URL: http://arxiv.org/abs/2403.10984v1
- Date: Sat, 16 Mar 2024 17:32:59 GMT
- Title: IoTCO2: Assessing the End-To-End Carbon Footprint of Internet-of-Things-Enabled Deep Learning
- Authors: Ahmad Faiz, Shahzeen Attari, Gayle Buck, Fan Chen, Lei Jiang
- Abstract summary: Deep learning (DL) models are increasingly deployed on Internet of Things (IoT) devices for data processing.
Existing carbon footprint modeling tools neglect non-computing hardware components common in IoT devices.
This paper introduces \carb, an end-to-end modeling tool for precise carbon footprint estimation in IoT-enabled DL.
- Score: 6.1356834243468565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To improve privacy and ensure quality-of-service (QoS), deep learning (DL) models are increasingly deployed on Internet of Things (IoT) devices for data processing, significantly increasing the carbon footprint associated with DL on IoT, covering both operational and embodied aspects. Existing operational energy predictors often overlook quantized DL models and emerging neural processing units (NPUs), while embodied carbon footprint modeling tools neglect non-computing hardware components common in IoT devices, creating a gap in accurate carbon footprint modeling tools for IoT-enabled DL. This paper introduces \textit{\carb}, an end-to-end modeling tool for precise carbon footprint estimation in IoT-enabled DL, demonstrating a maximum $\pm21\%$ deviation in carbon footprint values compared to actual measurements across various DL models. Additionally, practical applications of \carb are showcased through multiple user case studies.
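The abstract's end-to-end framing, operational carbon (energy drawn at run time) plus embodied carbon (manufacturing, amortized over device lifetime), can be sketched as a back-of-the-envelope calculation. The function and all numeric values below are illustrative assumptions for demonstration, not part of the \carb tool or the paper's methodology:

```python
# Illustrative sketch of an end-to-end (operational + embodied) carbon
# estimate for on-device DL inference. All figures are assumed values,
# not measurements from the paper or outputs of \carb.

def total_carbon_g(energy_kwh_per_inference: float,
                   inferences: int,
                   grid_intensity_g_per_kwh: float,
                   embodied_carbon_g: float,
                   lifetime_inferences: int) -> float:
    """Operational carbon plus an amortized share of embodied carbon, in grams CO2e."""
    # Operational: energy consumed by the workload times grid carbon intensity.
    operational = energy_kwh_per_inference * inferences * grid_intensity_g_per_kwh
    # Embodied: manufacturing footprint attributed pro rata to this workload.
    embodied_share = embodied_carbon_g * inferences / lifetime_inferences
    return operational + embodied_share

# Example: 1e-6 kWh per inference, one million inferences, a 400 gCO2e/kWh
# grid, and 5 kg CO2e embodied carbon spread over a 1e8-inference lifetime.
print(total_carbon_g(1e-6, 1_000_000, 400.0, 5_000.0, 100_000_000))
```

Note that the paper's contribution is precisely that such simple estimates miss quantized models, NPUs, and non-computing hardware components; this sketch only illustrates the operational-plus-embodied decomposition the abstract refers to.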
Related papers
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing [76.72662577101988]
This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM and second, applying active learning techniques to the developed classification model.
This enables a human-in-the-loop mechanism that reduces the amount of labeled data required to train the classification model.
arXiv Detail & Related papers (2023-07-14T14:36:58Z)
- Exploring the Carbon Footprint of Hugging Face's ML Models: A Repository Mining Study [8.409033836300761]
This is the first repository mining study of carbon emissions reporting on the Hugging Face Hub, conducted via its API.
This study seeks to answer two research questions: (1) how do ML model creators measure and report carbon emissions on Hugging Face Hub?, and (2) what aspects impact the carbon emissions of training ML models?
arXiv Detail & Related papers (2023-05-18T17:52:58Z)
- Machine Guided Discovery of Novel Carbon Capture Solvents [48.7576911714538]
Machine learning offers a promising method for reducing the time and resource burdens of materials development.
We have developed an end-to-end "discovery cycle" to select new aqueous amines compatible with the commercially viable acid gas scrubbing carbon capture.
The prediction process shows 60% accuracy against experiment for both material parameters and 80% for a single parameter on an external test set.
arXiv Detail & Related papers (2023-03-24T18:32:38Z)
- Directed Acyclic Graph Factorization Machines for CTR Prediction via Knowledge Distillation [65.62538699160085]
We propose a Directed Acyclic Graph Factorization Machine (KD-DAGFM) to learn the high-order feature interactions from existing complex interaction models for CTR prediction via Knowledge Distillation.
KD-DAGFM achieves the best performance with less than 21.5% FLOPs of the state-of-the-art method on both online and offline experiments.
arXiv Detail & Related papers (2022-11-21T03:09:42Z)
- Measuring the Carbon Intensity of AI in Cloud Instances [91.28501520271972]
We provide a framework for measuring software carbon intensity, and propose to measure operational carbon emissions.
We evaluate a suite of approaches for reducing emissions on the Microsoft Azure cloud compute platform.
arXiv Detail & Related papers (2022-06-10T17:04:04Z)
- A Transistor Operations Model for Deep Learning Energy Consumption Scaling [14.856688747814912]
Deep Learning (DL) has transformed the automation of a wide range of industries and finds increasing ubiquity in society.
The increasing complexity of DL models and their widespread adoption have led to energy consumption doubling every 3-4 months.
Current FLOPs and MACs based methods only consider the linear operations.
We develop a bottom-level Transistor Operations (TOs) method to expose the role of activation functions and neural network structure in energy consumption scaling with DL model configuration.
arXiv Detail & Related papers (2022-05-30T12:42:33Z)
- Carbon Footprint of Selecting and Training Deep Learning Models for Medical Image Analysis [0.2936007114555107]
We focus on the carbon footprint of developing deep learning models for medical image analysis (MIA).
We present and compare the features of four tools to quantify the carbon footprint of DL.
We discuss simple strategies to cut down the environmental impact and make model selection and training processes more efficient.
arXiv Detail & Related papers (2022-03-04T09:22:47Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches do not supply the procedures and pipelines needed to deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Curb Your Carbon Emissions: Benchmarking Carbon Emissions in Machine Translation [0.0]
We study the carbon efficiency and look for alternatives to reduce the overall environmental impact of training models.
In our work, we assess the performance of models for machine translation, across multiple language pairs.
We examine the various components of these models to analyze aspects of our pipeline that can be optimized to reduce these carbon emissions.
arXiv Detail & Related papers (2021-09-26T12:30:10Z)
- Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models [0.3441021278275805]
Machine learning (ML) may become a significant contributor to climate change if the exponential growth in its compute and energy demands continues.
We propose that energy and carbon footprint of model development and training is reported alongside performance metrics using tools like Carbontracker.
arXiv Detail & Related papers (2020-07-06T20:24:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.