A Cherry-Picking Approach to Large Load Shaping for More Effective Carbon Reduction
- URL: http://arxiv.org/abs/2601.17990v1
- Date: Sun, 25 Jan 2026 20:43:51 GMT
- Title: A Cherry-Picking Approach to Large Load Shaping for More Effective Carbon Reduction
- Authors: Bokan Chen, Raiden Hasegawa, Adriaan Hilbers, Ross Koningstein, Ana Radovanović, Utkarsh Shah, Gabriela Volpato, Mohamed Ahmed, Tim Cary, Rod Frowd,
- Abstract summary: This study uses a series of calibrated ERCOT day-ahead direct current optimal power flow (DC-OPF) simulations for counterfactual analysis of load shaping strategies on grid CO2 emissions and cost of electricity. In terms of annual grid-level CO2 emissions reductions, LMP-based shaping outperforms other common strategies, but can be significantly improved upon.
- Score: 1.392465349422406
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Shaping multi-megawatt loads, such as data centers, impacts generator dispatch on the electric grid, which in turn affects system CO2 emissions and energy cost. Substantiating the effectiveness of prevalent load shaping strategies, such as those based on grid-level average carbon intensity, locational marginal price, or marginal emissions, is challenging due to the lack of detailed counterfactual data required for accurate attribution. This study uses a series of calibrated granular ERCOT day-ahead direct current optimal power flow (DC-OPF) simulations for counterfactual analysis of a broad set of load shaping strategies on grid CO2 emissions and cost of electricity. In terms of annual grid-level CO2 emissions reductions, LMP-based shaping outperforms other common strategies, but can be significantly improved upon. Examining the performance of practicable strategies under different grid conditions motivates a more effective load shaping approach: one that "cherry-picks" a daily strategy based on observable grid signals and historical data. The cherry-picking approach to power load shaping is applicable to any large flexible consumer on the electricity grid, such as data centers, distributed energy resources and Virtual Power Plants (VPPs).
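The cherry-picking idea described in the abstract (each day, select a shaping strategy from observable grid signals and historical data) can be illustrated with a minimal sketch. This is a hypothetical nearest-neighbor selector, not the paper's actual method: the feature vectors, strategy names, CO2 figures, and the Euclidean similarity metric are all illustrative assumptions.

```python
# Hypothetical sketch of "cherry-picking" a daily load shaping strategy:
# choose the strategy that historically reduced CO2 the most on past days
# whose observable grid signals resemble today's. All data and the
# similarity metric are illustrative, not from the paper.

from math import dist

# Historical records: (grid_signal_features, {strategy: co2_reduction_tons}).
# Features here are assumed to be (renewable_share, avg_day_ahead_price).
HISTORY = [
    ((0.9, 45.0), {"lmp": 120.0, "avg_carbon": 80.0, "flat": 0.0}),
    ((0.2, 30.0), {"lmp": 60.0, "avg_carbon": 95.0, "flat": 0.0}),
    ((0.8, 50.0), {"lmp": 130.0, "avg_carbon": 70.0, "flat": 0.0}),
    ((0.3, 28.0), {"lmp": 55.0, "avg_carbon": 90.0, "flat": 0.0}),
]

def cherry_pick(today_signals, history, k=2):
    """Pick the strategy with the best average historical CO2 reduction
    on the k most similar past days (similarity = Euclidean distance
    over the observable grid-signal features)."""
    nearest = sorted(history, key=lambda rec: dist(rec[0], today_signals))[:k]
    strategies = nearest[0][1].keys()
    scores = {s: sum(rec[1][s] for rec in nearest) / k for s in strategies}
    return max(scores, key=scores.get)

# A high-renewables, high-price day resembles past days where LMP-based
# shaping performed best.
print(cherry_pick((0.85, 48.0), HISTORY))  # -> "lmp"
```

In practice the selector would use whatever daily grid signals are observable ahead of dispatch, and the per-strategy historical performance would come from counterfactual analysis such as the DC-OPF simulations described above.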
Related papers
- Data-Efficient RLVR via Off-Policy Influence Guidance [84.60336960383867]
This work proposes a theoretically-grounded approach using influence functions to estimate the contribution of each data point to the learning objective. We develop Curriculum RL with Off-Policy Influence guidance (CROPI), a multi-stage RL framework that iteratively selects the most influential data for the current policy.
arXiv Detail & Related papers (2025-10-30T13:40:52Z) - Diffusion-Modeled Reinforcement Learning for Carbon and Risk-Aware Microgrid Optimization [48.70916202664808]
DiffCarl is a diffusion-modeled carbon- and risk-aware reinforcement learning algorithm for intelligent operation of multi-microgrid systems. It outperforms classic algorithms and state-of-the-art DRL solutions, with 2.3-30.1% lower operational cost. It also achieves 28.7% lower carbon emissions than its carbon-unaware variant and reduces performance variability.
arXiv Detail & Related papers (2025-07-22T03:27:07Z) - Control of Renewable Energy Communities using AI and Real-World Data [0.0]
This paper introduces a framework designed explicitly to handle these complexities and bridge the simulation-to-reality gap. It incorporates EnergAIze, a MADD-based multi-agent control strategy, and specifically addresses challenges related to real-world data collection, system integration, and user behavior modeling.
arXiv Detail & Related papers (2025-05-22T22:20:09Z) - Joint Resource Management for Energy-efficient UAV-assisted SWIPT-MEC: A Deep Reinforcement Learning Approach [50.52139512096988]
6G Internet of Things (IoT) networks face challenges in remote areas and disaster scenarios where ground infrastructure is unavailable. This paper proposes a novel unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) system enhanced by directional antennas to provide both computational and energy support for ground edge terminals.
arXiv Detail & Related papers (2025-05-06T06:46:19Z) - Towards net-zero manufacturing: carbon-aware scheduling for GHG emissions reduction [0.0]
Scope 2 emissions are the indirect emissions related to the production and consumption of grid electricity. This study introduces a carbon-aware permutation flow-shop scheduling model designed to reduce scope 2 emissions.
arXiv Detail & Related papers (2025-03-03T09:06:54Z) - RL for Mitigating Cascading Failures: Targeted Exploration via Sensitivity Factors [17.351232452350967]
The electricity grid's resiliency and climate change strongly impact one another. This paper introduces a physics-informed machine learning-based framework to enhance the grid's resiliency.
arXiv Detail & Related papers (2024-11-27T04:34:31Z) - Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency, reducing total variation by approximately 36% and charging cost by around 9.1% on average.
arXiv Detail & Related papers (2024-04-18T21:50:03Z) - Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A
Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z) - Transfer Deep Reinforcement Learning-based Large-scale V2G Continuous
Charging Coordination with Renewable Energy Sources [5.99526159525785]
Vehicle-to-grid (V2G) technique and large-scale scheduling algorithms have been developed to achieve a high level of renewable energy and power grid stability.
This paper proposes a deep reinforcement learning (DRL) method for the continuous charging/discharging coordination strategy.
arXiv Detail & Related papers (2022-10-13T13:21:55Z) - Data-Driven Stochastic AC-OPF using Gaussian Processes [54.94701604030199]
Integrating a significant amount of renewables into a power grid is probably the most effective way to reduce carbon emissions from power grids and slow down climate change.
This paper presents an alternative data-driven approach based on the AC power flow equations that can incorporate uncertainty inputs.
The Gaussian process (GP) approach learns a simple yet non-constrained data-driven model that closes the gap to the AC power flow equations.
arXiv Detail & Related papers (2022-07-21T23:02:35Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.