Supply chain emission estimation using large language models
- URL: http://arxiv.org/abs/2308.01741v1
- Date: Thu, 3 Aug 2023 13:06:37 GMT
- Title: Supply chain emission estimation using large language models
- Authors: Ayush Jain, Manikandan Padmanaban, Jagabondhu Hazra, Shantanu Godbole,
Kommy Weldemariam
- Abstract summary: We propose a first-of-a-kind framework that uses domain-adapted NLP foundation models to estimate Scope 3 emissions.
We compare the performance of the proposed framework with state-of-the-art text classification models such as TF-IDF, word2vec, and zero-shot learning.
- Score: 15.605998085195314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large enterprises face a crucial imperative to achieve the Sustainable
Development Goals (SDGs), especially goal 13, which focuses on combating
climate change and its impacts. To mitigate the effects of climate change,
reducing enterprise Scope 3 (supply chain) emissions is vital, as they account
for more than 90% of total emission inventories. However, tracking Scope 3
emissions proves challenging, as data must be collected from thousands of
upstream and downstream suppliers. To address these challenges, we
propose a first-of-a-kind framework that uses domain-adapted NLP foundation
models to estimate Scope 3 emissions, by utilizing financial transactions as a
proxy for purchased goods and services. We compared the performance of the
proposed framework with state-of-the-art text classification models such as
TF-IDF, word2vec, and zero-shot learning. Our results show that the
domain-adapted foundation model outperforms state-of-the-art text mining
techniques and performs as well as a subject matter expert (SME). The proposed
framework could accelerate Scope 3 estimation at enterprise scale and help
enterprises take appropriate climate action to achieve SDG 13.
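The spend-based pipeline the baselines implement can be sketched in a few lines: map each free-text financial transaction to a spend category by TF-IDF similarity, then multiply the spend amount by that category's emission factor. The category descriptions and emission factors below are invented for illustration; they are not the paper's actual commodity taxonomy or factors.

```python
import math
from collections import Counter

# Hypothetical spend categories with illustrative kgCO2e-per-USD factors.
CATEGORIES = {
    "air travel":      ("flight airline ticket travel airfare", 1.20),
    "office supplies": ("paper pens printer toner office supplies", 0.15),
    "cloud computing": ("cloud server compute hosting data center", 0.05),
}

def _tokens(text):
    return text.lower().split()

def _idf(corpus_tokens):
    # Smoothed inverse document frequency over the category corpus.
    n = len(corpus_tokens)
    df = Counter()
    for toks in corpus_tokens:
        df.update(set(toks))
    return {t: math.log((1 + n) / (1 + df[t])) + 1.0 for t in df}

def _vec(tokens, idf):
    # TF-IDF vector; tokens outside the category vocabulary are dropped.
    tf = Counter(tokens)
    return {t: (tf[t] / len(tokens)) * idf[t] for t in tokens if t in idf}

def _cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

_corpus = [_tokens(desc) for desc, _ in CATEGORIES.values()]
_IDF = _idf(_corpus)
_CAT_VECS = {name: _vec(_tokens(desc), _IDF)
             for name, (desc, _) in CATEGORIES.items()}

def estimate_scope3(transaction_text, spend_usd):
    """Return (best matching category, estimated kgCO2e) for one transaction."""
    q = _vec(_tokens(transaction_text), _IDF)
    best = max(CATEGORIES, key=lambda c: _cosine(q, _CAT_VECS[c]))
    return best, spend_usd * CATEGORIES[best][1]

print(estimate_scope3("airline ticket JFK to SFO", 800.0))
# -> ('air travel', 960.0)
```

The paper's contribution is replacing this bag-of-words matcher with a domain-adapted foundation model for the classification step; the spend-times-factor estimation step stays the same.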
Related papers
- Towards Universal Large-Scale Foundational Model for Natural Gas Demand Forecasting [12.60741035434783]
We propose the first foundation model specifically tailored for natural gas demand forecasting.
Our approach leverages contrastive learning to improve prediction accuracy in real-world scenarios.
We conducted extensive experiments using a large-scale dataset from ENN Group.
arXiv Detail & Related papers (2024-09-24T06:44:29Z)
- Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region [62.09891513612252]
We focus on limited-area modeling and train our model specifically for localized region-level downstream tasks.
We consider the MENA region due to its unique climatic challenges, where accurate localized weather forecasting is crucial for managing water resources, agriculture and mitigating the impacts of extreme weather events.
Our study aims to validate the effectiveness of integrating parameter-efficient fine-tuning (PEFT) methodologies, specifically Low-Rank Adaptation (LoRA) and its variants, to enhance forecast accuracy, as well as training speed, computational resource utilization, and memory efficiency in weather and climate modeling for specific regions.
arXiv Detail & Related papers (2024-09-11T19:31:56Z)
- MambaDS: Near-Surface Meteorological Field Downscaling with Topography Constrained Selective State Space Modeling [68.69647625472464]
Downscaling, a crucial task in meteorological forecasting, enables the reconstruction of high-resolution meteorological states for target regions.
Previous downscaling methods lacked tailored designs for meteorology and encountered structural limitations.
We propose a novel model called MambaDS, which enhances the utilization of multivariable correlations and topography information.
arXiv Detail & Related papers (2024-08-20T13:45:49Z)
- Revisiting Catastrophic Forgetting in Large Language Model Tuning [79.70722658190097]
Catastrophic Forgetting (CF) means models forgetting previously acquired knowledge when learning new data.
This paper takes the first step to reveal the direct link between the flatness of the model loss landscape and the extent of CF in the field of large language models.
Experiments on three widely-used fine-tuning datasets, spanning different model scales, demonstrate the effectiveness of our method in alleviating CF.
arXiv Detail & Related papers (2024-06-07T11:09:13Z)
- Generative AI for Low-Carbon Artificial Intelligence of Things with Large Language Models [67.0243099823109]
Generative AI (GAI) holds immense potential to reduce the carbon emissions of the Artificial Intelligence of Things (AIoT).
In this article, we explore the potential of GAI for carbon emissions reduction and propose a novel GAI-enabled solution for low-carbon AIoT.
We propose a Large Language Model (LLM)-enabled carbon emission optimization framework, in which we design pluggable LLM and Retrieval Augmented Generation (RAG) modules.
arXiv Detail & Related papers (2024-04-28T05:46:28Z)
- Emissions Reporting Maturity Model: supporting cities to leverage emissions-related processes through performance indicators and artificial intelligence [0.0]
This work proposes an Emissions Reporting Maturity Model (ERMM) for examining, clustering, and analysing data from emissions reporting initiatives.
The PIDP supports the preparation of the data from emissions-related databases, the classification of the data according to similarities highlighted by different clustering techniques, and the identification of performance indicator candidates.
arXiv Detail & Related papers (2023-12-08T17:51:57Z)
- TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain Credit Assessment Cold Start [5.0299791897740675]
The model aims to provide accurate credit assessment prediction for new supply chain borrowers with limited historical data.
The proposed model addresses four significant supply chain credit assessment challenges: domain shift, cold start, class imbalance, and interpretability.
arXiv Detail & Related papers (2023-11-30T17:47:02Z)
- Large Scale Masked Autoencoding for Reducing Label Requirements on SAR Data [5.235143203977019]
We apply a self-supervised pretraining scheme, masked autoencoding, to SAR amplitude data covering 8.7% of the Earth's land surface area.
We show that the use of this pretraining scheme reduces labelling requirements for the downstream tasks by more than an order of magnitude.
Our findings significantly advance climate change mitigation by facilitating the development of task and region-specific SAR models.
arXiv Detail & Related papers (2023-10-02T00:11:47Z)
- Ladder-of-Thought: Using Knowledge as Steps to Elevate Stance Detection [73.31406286956535]
We introduce the Ladder-of-Thought (LoT) for the stance detection task.
LoT directs the small LMs to assimilate high-quality external knowledge, refining the intermediate rationales produced.
Our empirical evaluations underscore LoT's efficacy, marking a 16% improvement over GPT-3.5 and a 10% enhancement compared to GPT-3.5 with CoT on the stance detection task.
arXiv Detail & Related papers (2023-08-31T14:31:48Z)
- A comparative study of statistical and machine learning models on near-real-time daily emissions prediction [0.0]
The rapid ascent in carbon dioxide emissions is a major cause of global warming and climate change.
This paper aims to select a suitable model to predict the near-real-time daily emissions of all sectors in China from January 1st, 2020 to September 30th, 2022.
arXiv Detail & Related papers (2023-02-02T15:14:27Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.