FedGreen: Carbon-aware Federated Learning with Model Size Adaptation
- URL: http://arxiv.org/abs/2404.15503v1
- Date: Tue, 23 Apr 2024 20:37:26 GMT
- Title: FedGreen: Carbon-aware Federated Learning with Model Size Adaptation
- Authors: Ali Abbasi, Fan Dong, Xin Wang, Henry Leung, Jiayu Zhou, Steve Drew
- Abstract summary: Federated learning (FL) provides a promising collaborative framework to build a model from distributed clients.
Cloud and edge servers hosting FL clients may exhibit diverse carbon footprints influenced by their geographical locations with varying power sources.
We propose FedGreen, a carbon-aware FL approach that trains models efficiently by adapting the sizes of the models shared with clients.
- Score: 36.283273000969636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) provides a promising collaborative framework to build a model from distributed clients, and this work investigates the carbon emissions of the FL process. Cloud and edge servers hosting FL clients may exhibit diverse carbon footprints influenced by their geographical locations with varying power sources, offering opportunities to reduce carbon emissions by training local models with adaptive computations and communications. In this paper, we propose FedGreen, a carbon-aware FL approach that trains models efficiently by adapting the sizes of the models shared with clients to their carbon profiles and locations, using ordered dropout as the model compression technique. We theoretically analyze the trade-offs between the produced carbon emissions and the convergence accuracy, accounting for the discrepancy in carbon intensity across countries to choose the parameters optimally. Empirical studies show that FedGreen can substantially reduce the carbon footprints of FL compared to the state-of-the-art while maintaining competitive model accuracy.
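For illustration, a minimal PyTorch sketch of ordered dropout as a nested-submodel compression scheme: a client assigned width fraction p trains only the first ceil(p * K) units of each layer, so every smaller submodel is a prefix of the full model. The class name, layer sizes, and p values below are assumptions for illustration, not FedGreen's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrderedDropoutLinear(nn.Linear):
    """Linear layer whose first ceil(p * out_features) units form a nested submodel."""
    def forward(self, x: torch.Tensor, p: float = 1.0) -> torch.Tensor:
        k = max(1, int(round(p * self.out_features)))  # units kept at width p
        return F.linear(x, self.weight[:k], self.bias[:k])

# A client in a low-carbon region might train at p = 1.0,
# while a client on a high-carbon-intensity grid trains at p = 0.25.
layer = OrderedDropoutLinear(128, 64)
x = torch.randn(8, 128)
print(layer(x, p=0.25).shape)  # torch.Size([8, 16])
```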
Related papers
- CarbonSense: A Multimodal Dataset and Baseline for Carbon Flux Modelling [9.05128569357374]
We present CarbonSense, the first machine learning-ready dataset for data-driven carbon flux modelling.
Our experiments illustrate the potential gains that multimodal deep learning techniques can bring to this domain.
arXiv Detail & Related papers (2024-06-07T13:47:40Z)
- OpenCarbonEval: A Unified Carbon Emission Estimation Framework in Large-Scale AI Models [16.93272879722972]
OpenCarbonEval is a unified framework for predicting the carbon emissions of large-scale models across diverse modalities.
We show that OpenCarbonEval achieves superior performance in predicting carbon emissions for both visual models and language models.
arXiv Detail & Related papers (2024-05-21T14:50:20Z)
- Generative AI for Low-Carbon Artificial Intelligence of Things with Large Language Models [67.0243099823109]
Generative AI (GAI) holds immense potential to reduce the carbon emissions of the Artificial Intelligence of Things (AIoT).
In this article, we explore the potential of GAI for carbon emissions reduction and propose a novel GAI-enabled solution for low-carbon AIoT.
We propose a Large Language Model (LLM)-enabled carbon emission optimization framework, in which we design pluggable LLM and Retrieval Augmented Generation (RAG) modules.
arXiv Detail & Related papers (2024-04-28T05:46:28Z)
- CAFE: Carbon-Aware Federated Learning in Geographically Distributed Data Centers [18.54380015603228]
Training large-scale artificial intelligence (AI) models demands significant computational power and energy, leading to increased carbon footprint with potential environmental repercussions.
This paper delves into the challenges of training AI models across geographically distributed (geo-distributed) data centers, emphasizing the balance between learning performance and carbon footprint.
We propose a new framework called CAFE (short for Carbon-Aware Federated Learning) to optimize training within a fixed carbon footprint budget (see the sketch below).
arXiv Detail & Related papers (2023-11-06T23:59:22Z)
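To make the budgeted-training idea concrete, here is a rough sketch (a deliberately simplified assumption, not CAFE's actual optimization): greedily admit the lowest-emission participants each round until a fixed carbon budget is exhausted. Client names and per-round emission estimates are hypothetical.

```python
def select_within_budget(clients, budget_kg: float):
    """clients: list of (client_id, estimated_round_emissions_kg)."""
    selected, spent = [], 0.0
    for cid, cost in sorted(clients, key=lambda c: c[1]):  # cheapest first
        if spent + cost <= budget_kg:
            selected.append(cid)
            spent += cost
    return selected, spent

# Hypothetical per-round emission estimates for three data centers.
clients = [("us-east", 0.8), ("eu-north", 0.2), ("ap-south", 1.5)]
print(select_within_budget(clients, budget_kg=1.0))  # (['eu-north', 'us-east'], 1.0)
```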
- Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation [58.550710456745726]
Fine-tuning is the most effective way of adapting pre-trained large language models (LLMs) to downstream applications.
Existing efficient fine-tuning techniques achieve only a limited reduction in training FLOPs.
We present GreenTrainer, a new technique that adaptively evaluates different tensors' backpropagation costs and contributions to the fine-tuned model accuracy (see the sketch below).
arXiv Detail & Related papers (2023-09-22T21:55:18Z)
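A simplified sketch of the selection idea described above (not GreenTrainer's exact method): rank tensors by estimated accuracy contribution per backpropagation FLOP and train only the top ones within a FLOPs budget. The per-tensor estimates are hypothetical.

```python
def pick_trainable_tensors(tensors, flops_budget: float):
    """tensors: list of (name, est_accuracy_contribution, backprop_flops)."""
    ranked = sorted(tensors, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0.0
    for name, _, flops in ranked:
        if used + flops <= flops_budget:
            chosen.append(name)
            used += flops
    return chosen  # tensors left out would be frozen (requires_grad = False)

# Hypothetical per-tensor estimates: (name, contribution, backprop FLOPs).
tensors = [("layer1.w", 0.10, 4e9), ("layer2.w", 0.30, 6e9), ("head.w", 0.25, 1e9)]
print(pick_trainable_tensors(tensors, flops_budget=7e9))  # ['head.w', 'layer2.w']
```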
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMMs) to fit the input data distributions across diverse clients (see the sketch below).
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
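A minimal sketch of the building block this summary names, assuming scikit-learn: fit a Gaussian mixture to one client's features to model its local input distribution. The data and component count are illustrative; this is not FedGMM itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_features = rng.normal(size=(500, 8))  # stand-in for one client's inputs

gmm = GaussianMixture(n_components=3, random_state=0).fit(client_features)
print(gmm.weights_)                            # mixture weights
print(gmm.score_samples(client_features[:2]))  # per-sample log-likelihoods
```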
- Green Federated Learning [7.003870178055125]
Federated Learning (FL) is a machine learning technique for training a centralized model using the data of decentralized entities.
FL may leverage as many as hundreds of millions of globally distributed end-user devices with diverse energy sources.
We propose the concept of Green FL, which involves optimizing FL parameters and making design choices to minimize carbon emissions.
arXiv Detail & Related papers (2023-03-26T02:23:38Z)
- Measuring the Carbon Intensity of AI in Cloud Instances [91.28501520271972]
We provide a framework for measuring software carbon intensity and propose measuring operational carbon emissions using location-based and time-specific marginal emissions data per energy unit (a worked example follows this entry).
We evaluate a suite of approaches for reducing emissions on the Microsoft Azure cloud compute platform.
arXiv Detail & Related papers (2022-06-10T17:04:04Z)
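As a worked example in the spirit of this framework, operational emissions can be computed as energy drawn times the time- and location-specific carbon intensity of the grid; the kWh and gCO2/kWh figures below are made up.

```python
# Hourly energy use of a hypothetical training job and the (made-up)
# marginal grid carbon intensity at the job's location for those hours.
energy_kwh = [12.0, 9.5, 11.2]
intensity_g_per_kwh = [420.0, 180.0, 300.0]

emissions_kg = sum(e * i for e, i in zip(energy_kwh, intensity_g_per_kwh)) / 1000.0
print(f"{emissions_kg:.2f} kg CO2eq")  # 10.11 kg CO2eq
```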
- Curb Your Carbon Emissions: Benchmarking Carbon Emissions in Machine Translation [0.0]
We study carbon efficiency and look for alternatives to reduce the overall environmental impact of training models.
In our work, we assess the performance of models for machine translation, across multiple language pairs.
We examine the various components of these models to analyze aspects of our pipeline that can be optimized to reduce these carbon emissions.
arXiv Detail & Related papers (2021-09-26T12:30:10Z)
- A Framework for Energy and Carbon Footprint Analysis of Distributed and Federated Edge Learning [48.63610479916003]
This article breaks down and analyzes the main factors that influence the environmental footprint of distributed learning policies.
It models both vanilla and decentralized FL policies driven by consensus.
Results show that FL allows remarkable end-to-end energy savings (30%-40%) for wireless systems characterized by low bit/Joule efficiency (see the sketch below).
arXiv Detail & Related papers (2021-03-18T16:04:42Z)
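A back-of-envelope sketch of the per-round energy breakdown such an analysis models, where FL round energy splits into local computation and update communication; every figure below is an illustrative assumption, not a result from the paper.

```python
rounds, clients = 100, 20
e_compute_j = 50.0       # Joules per client per local training round (assumed)
update_bits = 8e6        # size of one model update in bits (assumed)
radio_bit_per_j = 1e5    # link efficiency in bit/Joule (a low-efficiency link)

e_comm_j = update_bits / radio_bit_per_j       # 80 J per update
total_kj = rounds * clients * (e_compute_j + e_comm_j) / 1000.0
print(f"total FL energy = {total_kj:.0f} kJ")  # 260 kJ
```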
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.