Forte: An Interactive Visual Analytic Tool for Trust-Augmented Net Load Forecasting
- URL: http://arxiv.org/abs/2311.06413v1
- Date: Fri, 10 Nov 2023 22:15:11 GMT
- Title: Forte: An Interactive Visual Analytic Tool for Trust-Augmented Net Load Forecasting
- Authors: Kaustav Bhattacharjee, Soumya Kundu, Indrasis Chakraborty and Aritra Dasgupta
- Abstract summary: We present Forte, a visual analytics-based application to explore deep probabilistic net load forecasting models across various input variables.
We discuss observations made using Forte and demonstrate the effectiveness of visualization techniques to provide valuable insights into the correlation between weather inputs and net load forecasts.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate net load forecasting is vital for energy planning, aiding decisions
on trade and load distribution. However, assessing the performance of
forecasting models across diverse input variables, like temperature and
humidity, remains challenging, particularly for eliciting a high degree of
trust in the model outcomes. In this context, there is a growing need for
data-driven technological interventions to aid scientists in comprehending how
models react to both noisy and clean input variables, thus shedding light on
complex behaviors and fostering confidence in the outcomes. In this paper, we
present Forte, a visual analytics-based application to explore deep
probabilistic net load forecasting models across various input variables and
understand the error rates for different scenarios. With carefully designed
visual interventions, this web-based interface empowers scientists to derive
insights about model performance by simulating diverse scenarios, facilitating
an informed decision-making process. We discuss observations made using Forte
and demonstrate the effectiveness of visualization techniques to provide
valuable insights into the correlation between weather inputs and net load
forecasts, ultimately advancing grid capabilities by improving trust in
forecasting models.
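The paper does not include code, but the abstract's core idea, comparing a probabilistic forecaster's error rates under clean versus noisy weather inputs, can be sketched with the standard pinball (quantile) loss. Everything below is illustrative: the Gaussian predictive model, the toy temperature-to-load relationship, and all numeric parameters are assumptions, not details from Forte.

```python
import numpy as np
from statistics import NormalDist

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss, the usual per-quantile error for probabilistic forecasts."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

rng = np.random.default_rng(42)
temp_clean = rng.uniform(10, 35, size=500)                     # observed temperature (deg C)
true_load = 100.0 - 1.5 * temp_clean + rng.normal(0, 5, 500)   # net load (MW), toy relationship

# Hypothetical stand-in for a trained probabilistic net load model:
# a Gaussian predictive distribution whose mean depends on the temperature input.
def forecast_quantile(temperature, q, sigma=6.0):
    mean = 100.0 - 1.5 * temperature
    return mean + NormalDist(0, sigma).inv_cdf(q)

# Scenario comparison: the same model evaluated on clean and on noise-corrupted inputs.
quantiles = [0.1, 0.5, 0.9]
for label, temp in [("clean", temp_clean),
                    ("noisy", temp_clean + rng.normal(0, 3, 500))]:
    losses = [pinball_loss(true_load, forecast_quantile(temp, q), q) for q in quantiles]
    print(label, [round(l, 2) for l in losses])
```

The noisy scenario yields larger losses at every quantile, which is the kind of per-scenario error comparison the interface surfaces visually.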
Related papers
- Uncertainty Quantification for Transformer Models for Dark-Pattern Detection
This study focuses on detecting dark patterns: deceptive design choices that manipulate user decisions, undermining autonomy and consent.
We propose a differential fine-tuning approach implemented at the final classification head via uncertainty quantification with transformer-based pre-trained models.
arXiv Detail & Related papers (2024-12-06T18:31:51Z)
- On the Fairness, Diversity and Reliability of Text-to-Image Generative Models
Multimodal generative models have sparked critical discussions on their fairness, reliability, and potential for misuse.
We propose an evaluation framework designed to assess model reliability through their responses to perturbations in the embedding space.
Our method lays the groundwork for detecting unreliable, bias-injected models and retrieval of bias provenance.
arXiv Detail & Related papers (2024-11-21T09:46:55Z)
- Traj-Explainer: An Explainable and Robust Multi-modal Trajectory Prediction Approach
Navigating complex traffic environments has been significantly enhanced by advancements in intelligent technologies, enabling accurate environment perception and trajectory prediction for automated vehicles.
Existing research often neglects the consideration of the joint reasoning of scenario agents and lacks interpretability in trajectory prediction models.
This work designs an explainability-oriented trajectory prediction model, Explainable Diffusion Conditional-based Multimodal Trajectory Prediction (Traj-Explainer).
arXiv Detail & Related papers (2024-10-22T08:17:33Z)
- Future-Guided Learning: A Predictive Approach To Enhance Time-Series Forecasting
We introduce Future-Guided Learning, an approach that enhances time-series event forecasting.
Our approach involves two models: a detection model that analyzes future data to identify critical events and a forecasting model that predicts these events based on present data.
When discrepancies arise between the forecasting and detection models, the forecasting model undergoes more substantial updates.
arXiv Detail & Related papers (2024-10-19T21:22:55Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Who should I trust? A Visual Analytics Approach for Comparing Net Load Forecasting Models
This paper introduces a visual analytics-based application designed to compare the performance of deep-learning-based net load forecasting models with other models for probabilistic net load forecasting.
The application employs carefully selected visual analytic interventions, enabling users to discern differences in model performance across different solar penetration levels, dataset resolutions, and hours of the day over multiple months.
arXiv Detail & Related papers (2024-07-31T02:57:21Z)
- Performative Time-Series Forecasting
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Study of Distractors in Neural Models of Code
Finding important features that contribute to the prediction of neural models is an active area of research in explainable AI.
In this work, we present an inverse perspective of distractor features: features that cast doubt about the prediction by affecting the model's confidence in its prediction.
Our experiments across various tasks, models, and datasets of code reveal that the removal of tokens can have a significant impact on the confidence of models in their predictions.
arXiv Detail & Related papers (2023-03-03T06:54:01Z)
- Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts
We build upon a general definition of information-theoretic saliency grounded in human perception.
We propose to express the saliency of an observed window in terms of the differential entropy of the resulting predicted future distribution.
We empirically demonstrate how our framework can recover salient observed windows from head pose features for the sample task of speaking-turn forecasting.
arXiv Detail & Related papers (2022-06-01T18:00:04Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Probabilistic electric load forecasting through Bayesian Mixture Density Networks
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
arXiv Detail & Related papers (2020-12-23T16:21:34Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.