Supply chain emission estimation using large language models
- URL: http://arxiv.org/abs/2308.01741v1
- Date: Thu, 3 Aug 2023 13:06:37 GMT
- Title: Supply chain emission estimation using large language models
- Authors: Ayush Jain, Manikandan Padmanaban, Jagabondhu Hazra, Shantanu Godbole,
Kommy Weldemariam
- Abstract summary: We propose a first-of-a-kind framework that uses domain-adapted NLP foundation models to estimate Scope 3 emissions.
We compare the performance of the proposed framework with state-of-the-art text classification models such as TF-IDF, Word2Vec, and zero-shot learning.
- Score: 15.605998085195314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large enterprises face a crucial imperative to achieve the Sustainable
Development Goals (SDGs), especially Goal 13, which focuses on combating
climate change and its impacts. To mitigate the effects of climate change,
reducing enterprise Scope 3 (supply chain) emissions is vital, as they account
for more than 90% of total emission inventories. However, tracking Scope 3
emissions proves challenging, as data must be collected from thousands of
upstream and downstream suppliers. To address these challenges, we propose a
first-of-a-kind framework that uses domain-adapted NLP foundation models to
estimate Scope 3 emissions, using financial transactions as a proxy for
purchased goods and services. We compared the performance of the proposed
framework with state-of-the-art text classification models such as TF-IDF,
Word2Vec, and zero-shot learning. Our results show that the domain-adapted
foundation model outperforms state-of-the-art text mining techniques and
performs as well as a subject matter expert (SME). The proposed framework could
accelerate Scope 3 estimation at enterprise scale and help enterprises take
appropriate climate actions to achieve SDG 13.
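The pipeline described above can be sketched in a few lines: a domain-adapted text classifier maps a free-text purchase or ledger description to a commodity/spend category, and a spend-based (EEIO-style) emission factor converts the spend amount into kgCO2e. The sketch below only illustrates that idea and is not the authors' implementation; the model checkpoint, category labels, and emission-factor values are hypothetical placeholders.

```python
# Minimal sketch of the spend-classification idea from the abstract.
# The model checkpoint, category labels, and emission factors below are
# hypothetical placeholders, not values from the paper.
from transformers import pipeline

# Hypothetical domain-adapted classifier that maps a free-text purchase
# description to a spend (commodity/service) category.
classifier = pipeline(
    "text-classification",
    model="your-org/domain-adapted-spend-classifier",  # placeholder checkpoint
)

# Illustrative spend-based (EEIO-style) emission factors, in kgCO2e per USD.
EMISSION_FACTORS = {
    "air_transport": 1.20,
    "office_paper": 0.90,
    "cloud_computing": 0.15,
}

def estimate_scope3_kgco2e(description: str, spend_usd: float) -> float:
    """Classify one financial transaction and convert its spend into kgCO2e."""
    category = classifier(description)[0]["label"]
    factor = EMISSION_FACTORS.get(category, 0.0)  # unknown category -> 0 in this sketch
    return spend_usd * factor

# Example: USD 10,000 of air travel at 1.20 kgCO2e/USD -> 12,000 kgCO2e.
print(estimate_scope3_kgco2e("Q3 employee air travel - international routes", 10_000))
```

The comparison in the abstract swaps only the classification step (TF-IDF, Word2Vec, or zero-shot learning in place of the domain-adapted foundation model); the spend-times-factor conversion stays the same.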
Related papers
- FinTSB: A Comprehensive and Practical Benchmark for Financial Time Series Forecasting [58.70072722290475]
Financial time series (FinTS) record the behavior of human-brain-augmented decision-making.
FinTSB is a comprehensive and practical benchmark for financial time series forecasting.
arXiv Detail & Related papers (2025-02-26T05:19:16Z)
- Unveiling Environmental Impacts of Large Language Model Serving: A Functional Unit View [2.5832043241251337]
Large language models (LLMs) offer powerful capabilities but come with significant environmental costs, particularly in carbon emissions.
We introduce the concept of a functional unit (FU) and develop FUEL, the first FU-based framework for evaluating LLM's environmental impact.
Our findings highlight the potential for reducing carbon emissions by optimizing model selection, deployment strategies, and hardware choices.
arXiv Detail & Related papers (2025-02-16T20:20:18Z)
- Group Reasoning Emission Estimation Networks [11.479035866165926]
We introduce an AI-driven carbon accounting framework that standardizes enterprise-level emission estimation.
We use a novel reasoning approach with large language models (LLMs).
Experiments on 1,114 NAICS categories yield state-of-the-art performance.
arXiv Detail & Related papers (2025-02-08T09:02:43Z)
- Improving Power Plant CO2 Emission Estimation with Deep Learning and Satellite/Simulated Data [0.0]
CO2 emissions from power plants, as significant super emitters, substantially contribute to global warming.
This study addresses challenges by expanding the available dataset through the integration of NO2 data from Sentinel-5P, generating continuous XCO2 maps, and incorporating real satellite observations from OCO-2/3 for over 71 power plants in data-scarce regions.
arXiv Detail & Related papers (2025-02-04T08:05:15Z)
- The Dual-use Dilemma in LLMs: Do Empowering Ethical Capacities Make a Degraded Utility? [54.18519360412294]
Large Language Models (LLMs) must balance between rejecting harmful requests for safety and accommodating legitimate ones for utility.
This paper presents a Direct Preference Optimization (DPO) based alignment framework that achieves better overall performance.
We analyze experimental results obtained from testing DeepSeek-R1 on our benchmark and reveal the critical ethical concerns raised by this highly acclaimed model.
arXiv Detail & Related papers (2025-01-20T06:35:01Z)
- Towards Universal Large-Scale Foundational Model for Natural Gas Demand Forecasting [12.60741035434783]
We propose the first foundation model specifically tailored for natural gas demand forecasting.
Our approach leverages contrastive learning to improve prediction accuracy in real-world scenarios.
We conducted extensive experiments using a large-scale dataset from ENN Group.
arXiv Detail & Related papers (2024-09-24T06:44:29Z)
- Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region [62.09891513612252]
We focus on limited-area modeling and train our model specifically for localized region-level downstream tasks.
We consider the MENA region due to its unique climatic challenges, where accurate localized weather forecasting is crucial for managing water resources, agriculture and mitigating the impacts of extreme weather events.
Our study aims to validate the effectiveness of integrating parameter-efficient fine-tuning (PEFT) methodologies, specifically Low-Rank Adaptation (LoRA) and its variants, to enhance forecast accuracy as well as training speed, computational resource utilization, and memory efficiency in weather and climate modeling for specific regions (a minimal LoRA sketch follows this entry).
arXiv Detail & Related papers (2024-09-11T19:31:56Z)
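To make the PEFT/LoRA idea mentioned in the entry above concrete, here is a minimal, generic sketch using the Hugging Face peft library. The base model (a small text classifier, not a weather model), target modules, and hyperparameters are placeholders chosen for illustration and are not taken from the paper.

```python
# Generic LoRA setup sketch with the Hugging Face peft library.
# Base model, target modules, and hyperparameters are illustrative placeholders;
# this is not the configuration used in the cited weather-forecasting paper.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # small stand-in model for the sketch
)

lora_cfg = LoraConfig(
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling applied to the adapter output
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections to adapt
    lora_dropout=0.05,
    task_type="SEQ_CLS",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```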
- MambaDS: Near-Surface Meteorological Field Downscaling with Topography Constrained Selective State Space Modeling [68.69647625472464]
Downscaling, a crucial task in meteorological forecasting, enables the reconstruction of high-resolution meteorological states for target regions.
Previous downscaling methods lacked tailored designs for meteorology and encountered structural limitations.
We propose a novel model called MambaDS, which enhances the utilization of multivariable correlations and topography information.
arXiv Detail & Related papers (2024-08-20T13:45:49Z)
- Revisiting Catastrophic Forgetting in Large Language Model Tuning [79.70722658190097]
Catastrophic Forgetting (CF) means models forgetting previously acquired knowledge when learning new data.
This paper takes the first step to reveal the direct link between the flatness of the model loss landscape and the extent of CF in the field of large language models.
Experiments on three widely-used fine-tuning datasets, spanning different model scales, demonstrate the effectiveness of our method in alleviating CF.
arXiv Detail & Related papers (2024-06-07T11:09:13Z)
- Generative AI for Low-Carbon Artificial Intelligence of Things with Large Language Models [67.0243099823109]
Generative AI (GAI) holds immense potential to reduce the carbon emissions of the Artificial Intelligence of Things (AIoT).
In this article, we explore the potential of GAI for carbon emissions reduction and propose a novel GAI-enabled solution for low-carbon AIoT.
We propose a Large Language Model (LLM)-enabled carbon emission optimization framework, in which we design pluggable LLM and Retrieval Augmented Generation (RAG) modules.
arXiv Detail & Related papers (2024-04-28T05:46:28Z)
- Emissions Reporting Maturity Model: supporting cities to leverage emissions-related processes through performance indicators and artificial intelligence [0.0]
This work proposes an Emissions Reporting Maturity Model (ERMM) for examining, clustering, and analysing data from emissions reporting initiatives.
The PIDP supports the preparation of the data from emissions-related databases, the classification of the data according to similarities highlighted by different clustering techniques, and the identification of performance indicator candidates.
arXiv Detail & Related papers (2023-12-08T17:51:57Z)
- TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain Credit Assessment Cold Start [5.0299791897740675]
The model aims to provide accurate credit assessment prediction for new supply chain borrowers with limited historical data.
The proposed model addresses four significant supply chain credit assessment challenges: domain shift, cold start, imbalanced-class and interpretability.
arXiv Detail & Related papers (2023-11-30T17:47:02Z)
- Large Scale Masked Autoencoding for Reducing Label Requirements on SAR Data [5.235143203977019]
We apply a self-supervised pretraining scheme, masked autoencoding, to SAR amplitude data covering 8.7% of the Earth's land surface area.
We show that the use of this pretraining scheme reduces labelling requirements for the downstream tasks by more than an order of magnitude.
Our findings significantly advance climate change mitigation by facilitating the development of task and region-specific SAR models.
arXiv Detail & Related papers (2023-10-02T00:11:47Z)
- Ladder-of-Thought: Using Knowledge as Steps to Elevate Stance Detection [73.31406286956535]
We introduce the Ladder-of-Thought (LoT) for the stance detection task.
LoT directs the small LMs to assimilate high-quality external knowledge, refining the intermediate rationales produced.
Our empirical evaluations underscore LoT's efficacy, marking a 16% improvement over GPT-3.5 and a 10% enhancement compared to GPT-3.5 with CoT on the stance detection task.
arXiv Detail & Related papers (2023-08-31T14:31:48Z)
- A comparative study of statistical and machine learning models on near-real-time daily emissions prediction [0.0]
The rapid ascent in carbon dioxide emissions is a major cause of global warming and climate change.
This paper aims to select a suitable model to predict the near-real-time daily emissions of all sectors in China from January 1st, 2020 to September 30th, 2022.
arXiv Detail & Related papers (2023-02-02T15:14:27Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in the future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)