Neural network interpretability for forecasting of aggregated renewable generation
- URL: http://arxiv.org/abs/2106.10476v2
- Date: Tue, 22 Jun 2021 10:00:31 GMT
- Title: Neural network interpretability for forecasting of aggregated renewable generation
- Authors: Yucun Lu, Ilgiz Murzakhanov, Spyros Chatzivasileiadis
- Abstract summary: Aggregated prosumers need to predict their solar power generation and whether it will exceed load.
This paper presents two interpretable neural networks to solve the problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid growth of renewable energy, many small photovoltaic (PV)
prosumers have emerged. Because solar power generation is uncertain, aggregated
prosumers need to predict their solar power generation and whether it will
exceed their load. This paper presents two
interpretable neural networks to solve the problem: one binary classification
neural network and one regression neural network. The neural networks are built
using TensorFlow. The global feature importance and local feature contributions
are examined by three gradient-based methods: Integrated Gradients, Expected
Gradients, and DeepLIFT. Moreover, we detect abnormal cases when predictions
might fail by estimating the prediction uncertainty using Bayesian neural
networks. Neural networks, which are interpreted by gradient-based methods and
complemented with uncertainty estimation, provide robust and explainable
forecasting for decision-makers.
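As a hedged illustration of the gradient-based attribution the abstract describes, here is a minimal Integrated Gradients sketch for a Keras model on tabular features; the model, baseline choice, and step count are assumptions for illustration, not details from the paper:
```python
import tensorflow as tf

def integrated_gradients(model, baseline, inputs, steps=50):
    """Approximate Integrated Gradients along the straight-line path
    from a baseline input to the actual input."""
    # Interpolation coefficients in [0, 1], shape (steps + 1, 1)
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1))
    # All interpolated inputs, shape (steps + 1, n_features)
    path = baseline + alphas * (inputs - baseline)
    with tf.GradientTape() as tape:
        tape.watch(path)
        preds = model(path)
    grads = tape.gradient(preds, path)
    # Trapezoidal-rule average of the gradients along the path
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    # Scale by the input difference to obtain per-feature attributions
    return (inputs - baseline) * avg_grads
```
For the binary classification network, `preds` would be the predicted probability; a plausible (but assumed) baseline for PV features is the all-zeros input, e.g., zero irradiance.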
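The abstract estimates prediction uncertainty with Bayesian neural networks, whose exact construction it does not specify. A common practical stand-in is Monte Carlo dropout, sketched here under that assumption:
```python
import numpy as np

def predictive_uncertainty(model, x, n_samples=100):
    # Sample a dropout-equipped Keras model with dropout left active
    # (training=True); the spread across samples estimates uncertainty.
    preds = np.stack([model(x, training=True).numpy()
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```
Inputs whose predictive standard deviation exceeds a chosen threshold can then be flagged as the abnormal cases the abstract mentions.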
Related papers
- An Interpretable Power System Transient Stability Assessment Method with Expert Guiding Neural-Regression-Tree [12.964139269555277]
An interpretable power system Transient Stability Assessment method with Expert guiding Neural-Regression-Tree (TSA-ENRT) is proposed.
TSA-ENRT utilizes an expert-guiding nonlinear regression tree to approximate the neural network's predictions, so the network can be explained by the interpretive rules generated by the tree model.
Extensive experiments indicate that the interpretive rules generated by the proposed TSA-ENRT are highly consistent with the neural network's predictions and agree more closely with human expert cognition.
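TSA-ENRT's expert-guiding component is more elaborate than a sketch can show; this only illustrates the underlying idea of distilling a network into a rule-readable surrogate tree, with placeholder data:
```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical data: operating-condition features and the network's
# stability scores on those samples (placeholders, not paper data).
X = np.random.rand(500, 8)
nn_scores = np.random.rand(500)

# Distill the network into a shallow regression tree; its splits can
# be read off as human-checkable rules.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, nn_scores)
print(export_text(surrogate))
```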
arXiv Detail & Related papers (2024-04-03T08:22:41Z)
- On the Convergence of Locally Adaptive and Scalable Diffusion-Based Sampling Methods for Deep Bayesian Neural Network Posteriors [2.3265565167163906]
Bayesian neural networks are a promising approach for modeling uncertainties in deep neural networks.
However, generating samples from the posterior distribution of neural networks remains a major challenge.
One advance in that direction would be the incorporation of adaptive step sizes into Monte Carlo Markov chain sampling algorithms.
In this paper, we demonstrate that these methods can have a substantial bias in the distribution they sample, even in the limit of vanishing step sizes and at full batch size.
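The paper analyzes adaptive-step-size variants; for orientation, a plain (non-adaptive) stochastic gradient Langevin dynamics update, the baseline such methods modify, might look like this sketch:
```python
import tensorflow as tf

def sgld_step(params, grad_log_post, step_size):
    # One SGLD update: gradient ascent on the log-posterior plus
    # Gaussian noise whose scale is tied to the step size.
    noise = tf.random.normal(tf.shape(params),
                             stddev=tf.sqrt(step_size))
    return params + 0.5 * step_size * grad_log_post + noise
```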
arXiv Detail & Related papers (2024-03-13T15:21:14Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Hybrid machine-learned homogenization: Bayesian data mining and convolutional neural networks [0.0]
This study aims to improve the machine-learned prediction by developing novel feature descriptors.
The iterative development of feature descriptors resulted in 37 novel features, which reduce the prediction error by roughly one third.
A combination of the feature-based approach and the convolutional neural network leads to a hybrid neural network.
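A minimal sketch of how such a hybrid might be wired in Keras; the 37 hand-crafted features follow the summary, but the image size and architecture are assumptions:
```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical inputs: a 64x64 microstructure image plus 37 features
image_in = layers.Input(shape=(64, 64, 1))
feat_in = layers.Input(shape=(37,))

# CNN branch for the image, concatenated with the feature branch
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Concatenate()([x, feat_in])
out = layers.Dense(1)(x)  # predicted effective property

hybrid = tf.keras.Model([image_in, feat_in], out)
hybrid.compile(optimizer="adam", loss="mse")
```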
arXiv Detail & Related papers (2023-02-24T09:59:29Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
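The paper's regression framework is more involved than a sketch can capture; this only illustrates the leaky integrate-and-fire dynamics that give such networks their temporal sparsity (parameter values are illustrative):
```python
def lif_trace(inputs, tau=20.0, v_th=1.0, dt=1.0):
    # Leaky integrate-and-fire: the membrane potential decays toward
    # zero, integrates input current, and emits a spike at threshold.
    v, spikes = 0.0, []
    for current in inputs:
        v += (dt / tau) * (-v + current)
        if v >= v_th:
            spikes.append(1)
            v = 0.0          # reset after spiking
        else:
            spikes.append(0)
    return spikes
```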
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Optimal Learning Rates of Deep Convolutional Neural Networks: Additive Ridge Functions [19.762318115851617]
We consider the mean squared error analysis for deep convolutional neural networks.
We show that, for additive ridge functions, convolutional neural networks followed by one fully connected layer with ReLU activation functions can reach optimal mini-max rates.
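For context, an additive ridge function combines univariate functions of linear projections; a common form (the paper's precise assumptions may differ) is:
```latex
f(x) = \sum_{j=1}^{m} g_j(\xi_j \cdot x), \qquad x \in \mathbb{R}^d,
```
where each g_j is a univariate function and each \xi_j is a fixed direction vector in R^d.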
arXiv Detail & Related papers (2022-02-24T14:22:32Z)
- Bayesian Neural Networks: Essentials [0.6091702876917281]
It is nontrivial to understand, design and train Bayesian neural networks due to their complexities.
The depth of such networks makes it redundant, and costly, to account for uncertainty across a large number of successive layers.
Hybrid Bayesian neural networks, which use a few probabilistic layers judiciously positioned in the network, provide a practical solution.
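One common way to build such a hybrid uses TensorFlow Probability's DenseFlipout for the single probabilistic layer; a sketch under that assumption, with illustrative layer sizes:
```python
import tensorflow as tf
import tensorflow_probability as tfp

# Hybrid Bayesian network: cheap deterministic feature layers, with a
# single weight-uncertain layer placed near the output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          input_shape=(10,)),  # hypothetical 10 features
    tf.keras.layers.Dense(64, activation="relu"),
    tfp.layers.DenseFlipout(1),  # the one probabilistic layer
])
```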
arXiv Detail & Related papers (2021-06-22T13:54:17Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
This design is instantiated on convolutional neural networks (CNNs), yielding the CNN-F model.
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.