AutoPCF: Efficient Product Carbon Footprint Accounting with Large
Language Models
- URL: http://arxiv.org/abs/2308.04241v2
- Date: Fri, 11 Aug 2023 06:09:37 GMT
- Title: AutoPCF: Efficient Product Carbon Footprint Accounting with Large
Language Models
- Authors: Zhu Deng, Jinjie Liu, Biao Luo, Can Yuan, Qingrun Yang, Lei Xiao,
Wenwen Zhou, Zhu Liu
- Abstract summary: We propose an automatic AI-driven PCF accounting framework, called AutoPCF, which applies deep learning algorithms to automatically match calculation parameters and ultimately calculates the PCF.
The results demonstrate its potential in achieving automatic modeling and estimation of PCF with a large reduction in modeling time from days to minutes.
- Score: 5.875750227370339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The product carbon footprint (PCF) is crucial for decarbonizing the supply
chain, as it measures the direct and indirect greenhouse gas emissions caused
by all activities during the product's life cycle. However, PCF accounting
often requires expert knowledge and significant time to construct life cycle
models. In this study, we test and compare the emergent ability of five large
language models (LLMs) in modeling the 'cradle-to-gate' life cycles of products
and generating the inventory data of inputs and outputs, revealing their
limitations as a generalized PCF knowledge database. Building on these LLMs, we
propose an automatic AI-driven PCF accounting framework, called AutoPCF, which
also applies deep learning algorithms to automatically match calculation
parameters and ultimately calculates the PCF. The results of estimating the
carbon footprint for three case products using the AutoPCF framework
demonstrate its potential in achieving automatic modeling and estimation of PCF
with a large reduction in modeling time from days to minutes.
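The accounting step at the end of such a pipeline can be sketched in a few lines: once a cradle-to-gate inventory of inputs has been generated and each input has been matched to an emission factor, the footprint is the sum of quantity times factor. This is a minimal illustration, not the paper's implementation; all item names, quantities, and emission-factor values below are hypothetical, and the actual framework uses LLMs to build the inventory and deep learning to perform the matching.

```python
# Hypothetical sketch of the final PCF accounting step: sum over all
# inventory items of (activity quantity) x (matched emission factor).
# Names and numbers are illustrative only, not taken from the paper.

def calculate_pcf(inventory, emission_factors):
    """Return total kgCO2e for a cradle-to-gate inventory.

    inventory: dict mapping input name -> quantity (e.g. kg, kWh)
    emission_factors: dict mapping input name -> kgCO2e per unit
    Items without a matched factor are skipped; a real system
    would flag them for review instead.
    """
    total = 0.0
    for item, quantity in inventory.items():
        if item in emission_factors:
            total += quantity * emission_factors[item]
    return total

# Toy inventory for one unit of a product (illustrative values).
inventory = {"steel_kg": 2.0, "electricity_kwh": 5.0}
factors = {"steel_kg": 1.85, "electricity_kwh": 0.5}  # kgCO2e per unit

print(calculate_pcf(inventory, factors))  # 2.0*1.85 + 5.0*0.5 = 6.2
```

In a full pipeline, the hard work lies upstream of this sum: constructing the inventory and matching each input to the right emission factor, which is where the paper applies LLMs and deep learning.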
Related papers
- Carbon Footprint Accounting Driven by Large Language Models and Retrieval-augmented Generation [3.428260237038657]
Traditional life cycle assessment methods rely heavily on human expertise, making near-real-time updates challenging.
This paper introduces a novel approach integrating large language models (LLMs) with retrieval-augmented generation technology to enhance the real-time, professional, and economical aspects of carbon footprint information retrieval and analysis.
arXiv Detail & Related papers (2024-08-19T06:05:24Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to reconcile the demand for high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- Green Federated Learning [7.003870178055125]
Federated Learning (FL) is a machine learning technique for training a centralized model using data from decentralized entities.
FL may leverage as many as hundreds of millions of globally distributed end-user devices with diverse energy sources.
We propose the concept of Green FL, which involves optimizing FL parameters and making design choices to minimize carbon emissions.
arXiv Detail & Related papers (2023-03-26T02:23:38Z)
- Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning [77.62876532784759]
Machine learning (ML) requires using energy to carry out computations during the model training process.
Generating this energy carries an environmental cost in terms of greenhouse gas emissions, depending on the quantity of energy used and its source.
We present a survey of the carbon emissions of 95 ML models across time and different tasks in natural language processing and computer vision.
arXiv Detail & Related papers (2023-02-16T18:35:00Z)
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity because of their self-attention mechanism, albeit at high computational cost.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model [72.65502770895417]
We quantify the carbon footprint of BLOOM, a 176-billion parameter language model, across its life cycle.
We estimate that BLOOM's final training emitted approximately 24.7 tonnes of carbon eq if we consider only the dynamic power consumption.
We conclude with a discussion regarding the difficulty of precisely estimating the carbon footprint of machine learning models.
arXiv Detail & Related papers (2022-11-03T17:13:48Z)
- Measuring the Carbon Intensity of AI in Cloud Instances [91.28501520271972]
We provide a framework for measuring software carbon intensity, and propose to measure operational carbon emissions.
We evaluate a suite of approaches for reducing emissions on the Microsoft Azure cloud compute platform.
arXiv Detail & Related papers (2022-06-10T17:04:04Z)
- Curb Your Carbon Emissions: Benchmarking Carbon Emissions in Machine Translation [0.0]
We study the carbon efficiency and look for alternatives to reduce the overall environmental impact of training models.
In our work, we assess the performance of models for machine translation, across multiple language pairs.
We examine the various components of these models to analyze aspects of our pipeline that can be optimized to reduce these carbon emissions.
arXiv Detail & Related papers (2021-06-24T13:43:43Z)
- Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting [68.86835407617778]
Autoformer is a novel decomposition architecture with an Auto-Correlation mechanism.
In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a relative improvement on six benchmarks.
arXiv Detail & Related papers (2021-06-24T13:43:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.