Clustering-based Multitasking Deep Neural Network for Solar Photovoltaics Power Generation Prediction
- URL: http://arxiv.org/abs/2405.05989v2
- Date: Tue, 14 May 2024 00:39:43 GMT
- Title: Clustering-based Multitasking Deep Neural Network for Solar Photovoltaics Power Generation Prediction
- Authors: Hui Song, Zheng Miao, Ali Babalhavaeji, Saman Mehrnia, Mahdi Jalili, Xinghuo Yu
- Abstract summary: We propose a clustering-based multitasking deep neural network (CM-DNN) framework for PV power generation prediction.
For each type, a deep neural network (DNN) is employed and trained until the accuracy cannot be improved.
For a specified customer type, inter-model knowledge transfer is conducted to enhance its training accuracy.
The proposed CM-DNN is tested on a real-world PV power generation dataset.
- Score: 16.263501526929975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing installation of Photovoltaics (PV) cells leads to more generation from renewable energy sources (RES), but also increases the uncertainty of energy scheduling. Predicting PV power generation is important for energy management and dispatch optimization in smart grids. However, PV power generation data are often collected across different types of customers (e.g., residential, agricultural, industrial, and commercial) while the customer information is de-identified. As a result, a single forecasting model is usually trained on all PV power generation data, relying on intra-model self-learning to capture the various patterns, rather than constructing a separate predictor for each customer type. In this paper, we propose a clustering-based multitasking deep neural network (CM-DNN) framework for PV power generation prediction. K-means is applied to cluster the data into different customer types. For each type, a deep neural network (DNN) is employed and trained until the accuracy can no longer be improved. Subsequently, for a specified customer type (i.e., the target task), inter-model knowledge transfer is conducted to enhance its training accuracy. During this process, source task selection chooses the optimal subset of tasks (excluding the target customer), and each selected source task uses a coefficient to determine how much DNN model knowledge (weights and biases) is transferred to the target prediction task. The proposed CM-DNN is tested on a real-world PV power generation dataset, and its superiority is demonstrated by comparing its prediction performance against a single model trained on the whole dataset without clustering.
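The pipeline described in the abstract, K-means clustering into customer types, one DNN per cluster, and coefficient-weighted inter-model knowledge transfer into a target task, can be illustrated with a minimal Python sketch. The network sizes, the chosen source tasks, and the transfer coefficients below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the CM-DNN idea: cluster PV profiles with K-means,
# train one small DNN per cluster, then warm-start a target cluster's model
# from a coefficient-weighted blend of selected source models before fine-tuning.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 24)).astype("float32")   # 200 toy daily PV profiles, 24 hourly values
y = X.mean(axis=1, keepdims=True)             # toy target: mean daily output

def make_dnn():
    return nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))

def train(model, Xc, yc, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    Xt, yt = torch.from_numpy(Xc), torch.from_numpy(yc)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(Xt), yt)
        loss.backward()
        opt.step()
    return loss.item()

# 1) K-means groups de-identified customers into pseudo customer types.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# 2) One DNN per cluster, trained independently (intra-model learning).
models = {}
for k in range(4):
    models[k] = make_dnn()
    train(models[k], X[labels == k], y[labels == k])

# 3) Inter-model knowledge transfer for a chosen target cluster: blend the
#    weights of selected source models into the target, then fine-tune.
target, sources = 0, [1, 3]        # source-task selection is assumed fixed here
coeffs = {1: 0.3, 3: 0.2}          # per-source transfer coefficients (assumed)
with torch.no_grad():
    for p_t, *p_s in zip(models[target].parameters(),
                         *[models[s].parameters() for s in sources]):
        blended = (1 - sum(coeffs.values())) * p_t.data
        for s, p in zip(sources, p_s):
            blended = blended + coeffs[s] * p.data
        p_t.data.copy_(blended)

final_loss = train(models[target], X[labels == target], y[labels == target])
print(f"target-cluster loss after transfer: {final_loss:.4f}")
```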
Related papers
- Adapt-$\infty$: Scalable Lifelong Multimodal Instruction Tuning via Dynamic Data Selection [89.42023974249122]
Adapt-$\infty$ is a new multi-way and adaptive data selection approach for Lifelong Instruction Tuning.
We construct pseudo-skill clusters by grouping gradient-based sample vectors.
We select the best-performing data selector for each skill cluster from a pool of selector experts.
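As a rough illustration of the pseudo-skill clustering step described above, the sketch below groups stand-in gradient-based sample vectors with k-means and assigns each cluster a selector from a small pool; the selector pool and the scoring criterion are assumptions, not the paper's method.

```python
# Illustrative sketch: cluster per-sample gradient features, then pick one
# selector per cluster from a small pool of selector "experts".
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
grad_vecs = rng.normal(size=(500, 64))        # stand-in gradient-based sample vectors

clusters = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(grad_vecs)

# Pool of selector experts: each returns a per-sample usefulness score.
selectors = {
    "grad_norm": lambda g: np.linalg.norm(g, axis=1),                  # gradient magnitude
    "diversity": lambda g: np.linalg.norm(g - g.mean(axis=0), axis=1), # distance from cluster mean
}

chosen = {}
for c in np.unique(clusters):
    g = grad_vecs[clusters == c]
    # Pick the selector whose scores spread the cluster out most (stand-in criterion).
    chosen[c] = max(selectors, key=lambda name: selectors[name](g).std())
print(chosen)
```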
arXiv Detail & Related papers (2024-10-14T15:48:09Z)
- Deep Convolutional Neural Networks for Short-Term Multi-Energy Demand Prediction of Integrated Energy Systems [0.0]
This paper develops six novel prediction models based on Convolutional Neural Networks (CNNs) for forecasting multi-energy power consumptions.
The models are applied in a comprehensive manner on a novel integrated electrical, heat and gas network system.
arXiv Detail & Related papers (2023-12-24T14:56:23Z)
- Secure short-term load forecasting for smart grids with transformer-based federated learning [0.0]
Electricity load forecasting is an essential task within smart grids to assist demand and supply balance.
Fine-grained load profiles can expose users' electricity consumption behaviors, which raises privacy and security concerns.
This paper presents a novel transformer-based deep learning approach with federated learning for short-term electricity load prediction.
arXiv Detail & Related papers (2023-10-26T15:27:55Z)
- FedWOA: A Federated Learning Model that uses the Whale Optimization Algorithm for Renewable Energy Prediction [0.0]
This paper introduces FedWOA, a novel federated learning model that aggregates a global prediction model from the weights of local neural network models trained on prosumer energy data.
The evaluation results on prosumer energy data show that FedWOA can effectively enhance the accuracy of energy prediction models by 25% for MSE and 16% for MAE compared to FedAVG.
arXiv Detail & Related papers (2023-09-19T05:44:18Z)
- Forecasting Intraday Power Output by a Set of PV Systems using Recurrent Neural Networks and Physical Covariates [0.0]
Accurate forecasts of the power output by PhotoVoltaic (PV) systems are critical to improve the operation of energy distribution grids.
We describe a neural autoregressive model that aims to perform such intraday forecasts.
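A minimal sketch of such an autoregressive intraday forecaster is shown below: a GRU consumes past PV power together with a physical covariate and is rolled forward one step at a time. The architecture and the covariate are illustrative assumptions rather than the paper's model.

```python
# Illustrative autoregressive intraday forecaster: feed back each prediction
# as the newest observation while advancing a known covariate sequence.
import torch
import torch.nn as nn

class ARForecaster(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, power, covariate):
        # power, covariate: (batch, time, 1)
        x = torch.cat([power, covariate], dim=-1)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])           # one-step-ahead prediction

    @torch.no_grad()
    def roll_forward(self, power, covariate, future_cov, steps):
        preds, p, c = [], power, covariate
        for t in range(steps):
            y = self.forward(p, c)
            preds.append(y)
            # slide the window: append the prediction and the next known covariate
            p = torch.cat([p[:, 1:], y.unsqueeze(1)], dim=1)
            c = torch.cat([c[:, 1:], future_cov[:, t:t + 1]], dim=1)
        return torch.cat(preds, dim=1)

model = ARForecaster()
past_power = torch.rand(4, 12, 1)   # 4 sites, last 12 intraday observations
past_cov = torch.rand(4, 12, 1)     # matching physical covariate (e.g., clear-sky value)
future_cov = torch.rand(4, 6, 1)    # covariate known for the next 6 steps
print(model.roll_forward(past_power, past_cov, future_cov, steps=6).shape)  # (4, 6)
```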
arXiv Detail & Related papers (2023-03-15T09:03:58Z)
- Federated Learning based Energy Demand Prediction with Clustered Aggregation [14.631304404306778]
Energy usage information collected by the clients' smart homes can be used to train a deep neural network to predict the future energy demand.
In this paper, we propose a recurrent neural network based energy demand predictor trained with federated learning on clustered clients to take advantage of distributed data.
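The clustered-aggregation idea can be sketched as follows: clients are grouped by a summary of their load profiles, and model weights are averaged only within each cluster, so every cluster keeps its own aggregated model. The client features, flattened weight vectors, and cluster count below are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative clustered federated aggregation: per-cluster FedAvg-style
# weighted averaging of locally trained model weights.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_clients, n_params = 10, 50
client_profiles = rng.random((n_clients, 24))             # e.g., mean hourly load per client
client_weights = rng.normal(size=(n_clients, n_params))   # locally trained weights (flattened)
client_sizes = rng.integers(100, 1000, size=n_clients)    # local dataset sizes

groups = KMeans(n_clusters=2, n_init=10, random_state=2).fit_predict(client_profiles)

cluster_models = {}
for g in np.unique(groups):
    idx = groups == g
    w = client_sizes[idx] / client_sizes[idx].sum()        # FedAvg-style size weighting
    cluster_models[g] = (w[:, None] * client_weights[idx]).sum(axis=0)

for g, model in cluster_models.items():
    print(f"cluster {g}: aggregated weight vector of shape {model.shape}")
```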
arXiv Detail & Related papers (2022-10-28T02:46:59Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.