Just-In-Time Learning for Operational Risk Assessment in Power Grids
- URL: http://arxiv.org/abs/2209.12762v1
- Date: Mon, 26 Sep 2022 15:11:27 GMT
- Title: Just-In-Time Learning for Operational Risk Assessment in Power Grids
- Authors: Oliver Stover, Pranav Karve, Sankaran Mahadevan, Wenbo Chen, Haoruo
Zhao, Mathieu Tanneau, Pascal Van Hentenryck
- Abstract summary: In a grid with a significant share of renewable generation, operators will need additional tools to evaluate the operational risk.
This paper proposes a Just-In-Time Risk Assessment Learning Framework (JITRALF) as an alternative.
JITRALF trains risk surrogates, one for each hour in the day, using Machine Learning (ML) to predict the quantities needed to estimate risk.
- Score: 12.939739997360016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a grid with a significant share of renewable generation, operators will
need additional tools to evaluate the operational risk due to the increased
volatility in load and generation. The computational requirements of the
forward uncertainty propagation problem, which must solve numerous
security-constrained economic dispatch (SCED) optimizations, are a major barrier
for such real-time risk assessment. This paper proposes a Just-In-Time Risk
Assessment Learning Framework (JITRALF) as an alternative. JITRALF trains risk
surrogates, one for each hour in the day, using Machine Learning (ML) to
predict the quantities needed to estimate risk, without explicitly solving the
SCED problem. This significantly reduces the computational burden of the
forward uncertainty propagation and allows for fast, real-time risk estimation.
The paper also proposes a novel, asymmetric loss function and shows that models
trained using the asymmetric loss perform better than those using symmetric
loss functions. JITRALF is evaluated on the French transmission system for
assessing the risk of insufficient operating reserves, the risk of load
shedding, and the expected operating cost.
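The abstract highlights a novel asymmetric loss but does not give its functional form. As a hedged illustration only, an asymmetric pinball-style loss that penalizes under-prediction of risk quantities more heavily than over-prediction could be sketched as follows; the `under_weight` parameter and the weighting scheme are assumptions, not the paper's actual loss:

```python
import numpy as np

def asymmetric_loss(y_true, y_pred, under_weight=0.9):
    """Pinball-style asymmetric loss (illustrative, not the paper's exact form).

    under_weight in (0, 1): the weight applied when the model under-predicts
    (y_pred < y_true), so underestimating a risk quantity is penalized more
    than overestimating it whenever under_weight > 0.5.
    """
    diff = y_true - y_pred
    return np.mean(np.maximum(under_weight * diff, (under_weight - 1.0) * diff))

# Under-prediction of a given magnitude costs more than over-prediction
# of the same magnitude.
y = np.array([10.0])
loss_under = asymmetric_loss(y, np.array([8.0]))   # under-predict by 2
loss_over = asymmetric_loss(y, np.array([12.0]))   # over-predict by 2
```

With a symmetric loss the two errors above would cost the same; the asymmetry is what lets a surrogate err on the conservative side when estimating operational risk.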
Related papers
- Enhancing Risk Assessment in Transformers with Loss-at-Risk Functions [3.2162648244439684]
We introduce a novel loss function, the Loss-at-Risk, which incorporates Value at Risk (VaR) and Conditional Value at Risk (CVaR) into Transformer models.
This integration allows Transformer models to recognize potential extreme losses and further improves their capability to handle high-stakes financial decisions.
We conduct a series of experiments with highly volatile financial datasets to demonstrate that our Loss-at-Risk function improves the Transformers' risk prediction and management capabilities.
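For concreteness, the VaR and CVaR quantities referenced above can be estimated empirically from a sample of losses. This is a minimal sketch of the standard sample estimators, not the paper's Transformer integration:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value at Risk and Conditional Value at Risk.

    VaR_alpha is the alpha-quantile of the loss distribution;
    CVaR_alpha is the mean loss in the tail at or beyond VaR_alpha.
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# A heavy-tailed loss sample: CVaR captures the extreme loss that the
# plain quantile (VaR) only touches.
losses = [1.0, 2.0, 3.0, 4.0, 100.0]
var, cvar = var_cvar(losses, alpha=0.8)  # var ~ 23.2, cvar == 100.0
```

By construction CVaR ≥ VaR, which is why tail-aware losses built on CVaR are more sensitive to extreme outcomes than quantile-based ones.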
arXiv Detail & Related papers (2024-11-04T19:44:43Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Safe Deployment for Counterfactual Learning to Rank with Exposure-Based Risk Minimization [63.93275508300137]
We introduce a novel risk-aware Counterfactual Learning To Rank method with theoretical guarantees for safe deployment.
Our experimental results demonstrate the efficacy of our proposed method, which is effective at avoiding initial periods of bad performance when little data is available.
arXiv Detail & Related papers (2023-04-26T15:54:23Z)
- Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training [44.7991257631318]
We propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation.
By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits.
arXiv Detail & Related papers (2023-04-20T12:10:12Z)
- Risk-Averse Reinforcement Learning via Dynamic Time-Consistent Risk Measures [10.221369785560785]
In this paper, we consider the problem of optimizing the dynamic risk of a sequence of rewards in Markov Decision Processes (MDPs).
Using a convex combination of expectation and conditional value-at-risk (CVaR) as a special one-step conditional risk measure, we reformulate the risk-averse MDP as a risk-neutral counterpart with augmented action space and manipulation on the immediate rewards.
Our numerical studies show that the risk-averse setting can reduce the variance and enhance robustness of the results.
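The one-step risk measure described above can be written as ρ(X) = (1 − λ)·E[X] + λ·CVaR_α(X). A minimal numpy sketch for a sample of one-step costs follows; the parameter names and the cost (rather than reward) convention are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def one_step_risk(costs, lam=0.5, alpha=0.9):
    """Convex combination of expectation and CVaR as a one-step risk measure.

    rho(X) = (1 - lam) * E[X] + lam * CVaR_alpha(X), written here for costs.
    lam = 0 recovers the risk-neutral expectation; lam = 1 is pure CVaR,
    the most risk-averse end of this family.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)
    cvar = costs[costs >= var].mean()
    return (1.0 - lam) * costs.mean() + lam * cvar
```

Sweeping λ from 0 to 1 interpolates between maximizing average performance and guarding against the worst-case tail, which is the trade-off the entry's numerical studies explore.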
arXiv Detail & Related papers (2023-01-14T21:43:18Z)
- Estimating value at risk: LSTM vs. GARCH [0.0]
We propose a novel value-at-risk estimator using a long short-term memory (LSTM) neural network.
Our results indicate that even for a relatively short time series, the LSTM could be used to refine or monitor risk estimation processes.
We evaluate the estimator on both simulated and market data with a focus on heteroscedasticity, finding that LSTM exhibits a similar performance to GARCH estimators on simulated data.
arXiv Detail & Related papers (2022-07-21T15:26:07Z) - A Survey of Risk-Aware Multi-Armed Bandits [84.67376599822569]
We review various risk measures of interest, and comment on their properties.
We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests.
We conclude by commenting on persisting challenges and fertile areas for future research.
arXiv Detail & Related papers (2022-05-12T02:20:34Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Detecting and Mitigating Test-time Failure Risks via Model-agnostic
Uncertainty Learning [30.86992077157326]
This paper introduces Risk Advisor, a novel post-hoc meta-learner for estimating failure risks and predictive uncertainties of any already-trained black-box classification model.
In addition to providing a risk score, the Risk Advisor decomposes the uncertainty estimates into aleatoric and epistemic uncertainty components.
Experiments on various families of black-box classification models and on real-world and synthetic datasets show that the Risk Advisor reliably predicts deployment-time failure risks.
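The aleatoric/epistemic split mentioned above can be illustrated with a standard entropy-based decomposition over an ensemble's predictive distributions. This is a generic sketch of that common technique, not the Risk Advisor's actual estimator:

```python
import numpy as np

def uncertainty_decomposition(member_probs):
    """Entropy-based split of predictive uncertainty over an ensemble.

    member_probs: array of shape (n_members, n_classes); each row is one
    ensemble member's predictive distribution for a single input.
    Total uncertainty = entropy of the mean prediction;
    aleatoric = mean per-member entropy (irreducible data noise);
    epistemic = total - aleatoric (disagreement between members,
    i.e. the mutual information). Illustrative sketch only.
    """
    p = np.asarray(member_probs, dtype=float)
    eps = 1e-12  # guard against log(0)
    mean_p = p.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))
    aleatoric = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    return total, aleatoric, total - aleatoric

# Members that agree -> epistemic ~ 0; members that disagree -> epistemic > 0.
_, _, e_agree = uncertainty_decomposition([[0.9, 0.1], [0.9, 0.1]])
_, _, e_disagree = uncertainty_decomposition([[0.9, 0.1], [0.1, 0.9]])
```

High epistemic uncertainty flags inputs where more training data could help, whereas high aleatoric uncertainty flags inherently noisy inputs; the entry above uses exactly this distinction to characterize deployment-time failure risks.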
arXiv Detail & Related papers (2021-09-09T17:23:31Z)
- Clinical Risk Prediction with Temporal Probabilistic Asymmetric Multi-Task Learning [80.66108902283388]
Multi-task learning methods should be used with caution for safety-critical applications, such as clinical risk prediction.
Existing asymmetric multi-task learning methods tackle this negative transfer problem by performing knowledge transfer from tasks with low loss to tasks with high loss.
We propose a novel temporal asymmetric multi-task learning model that performs knowledge transfer from certain tasks/timesteps to relevant uncertain tasks, based on feature-level uncertainty.
arXiv Detail & Related papers (2020-06-23T06:01:36Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.