Sky-image-based solar forecasting using deep learning with
multi-location data: training models locally, globally or via transfer
learning?
- URL: http://arxiv.org/abs/2211.02108v1
- Date: Thu, 3 Nov 2022 19:25:28 GMT
- Authors: Yuhao Nie, Quentin Paletta, Andea Scott, Luis Martin Pomares,
Guillaume Arbod, Sgouris Sgouridis, Joan Lasenby, Adam Brandt
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solar forecasting from ground-based sky images using deep learning models has
shown great promise in reducing the uncertainty in solar power generation. One
of the biggest challenges for training deep learning models is the availability
of labeled datasets. With more and more sky image datasets open sourced in
recent years, the development of accurate and reliable solar forecasting
methods has seen a huge growth in potential. In this study, we explore three
different training strategies for deep-learning-based solar forecasting models
by leveraging three heterogeneous datasets collected around the world with
drastically different climate patterns. Specifically, we compare the
performance of models trained individually based on local datasets (local
models) and models trained jointly based on the fusion of multiple datasets
from different locations (global models), and we further examine the knowledge
transfer from pre-trained solar forecasting models to a new dataset of interest
(transfer learning models). The results suggest that the local models work well
when deployed locally, but significant errors in the scale of the predictions
are observed when they are applied offsite. The global model can adapt well to
individual locations, though the potential increase in training effort needs to
be taken into account. Pre-training models on a large and diversified source dataset and
transferring to a local target dataset generally achieves superior performance
over the other two training strategies. Transfer learning brings the most
benefits when there are limited local data. With 80% less training data, it can
achieve 1% improvement over the local baseline model trained using the entire
dataset. Therefore, we call on the efforts from the solar forecasting community
to contribute to a global dataset containing a massive amount of imagery and
displaying diversified samples with a range of sky conditions.
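The three training strategies compared in the abstract can be sketched as follows. This is a minimal, hypothetical illustration using a toy linear regressor in place of the paper's deep learning model; the location names, dataset sizes, and fine-tuning schedule are assumptions for demonstration, not the paper's actual setup.

```python
# Toy sketch of the three strategies: local models, a global model on fused
# data, and transfer learning (pre-train globally, fine-tune on a small
# local subset). A linear model stands in for the sky-image CNN.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(scale, n=200, d=8):
    """Stand-in for one location's (image features, PV output) pairs."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = scale * (X @ w_true) + 0.1 * rng.normal(size=n)
    return X, y

def fit(X, y, w0=None, steps=500, lr=0.01):
    """Least-squares fit by gradient descent; w0 enables warm-start fine-tuning."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Three heterogeneous "locations" with different output scales.
locations = {name: make_dataset(s) for name, s in [("A", 1.0), ("B", 2.0), ("C", 0.5)]}

# 1) Local models: one model per location, trained on that location only.
local = {name: fit(X, y) for name, (X, y) in locations.items()}

# 2) Global model: a single model trained on the fused multi-location data.
Xg = np.vstack([X for X, _ in locations.values()])
yg = np.concatenate([y for _, y in locations.values()])
global_w = fit(Xg, yg)

# 3) Transfer learning: start from the globally pre-trained weights and
#    fine-tune on a small slice (here 20%) of the target location's data.
Xt, yt = locations["C"]
n_small = len(yt) // 5
transfer_w = fit(Xt[:n_small], yt[:n_small], w0=global_w, steps=100)

def rmse(w, X, y):
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

print(rmse(local["C"], Xt, yt), rmse(global_w, Xt, yt), rmse(transfer_w, Xt, yt))
```

The warm start is the key design choice: the fine-tuned model inherits what the global model learned from the other locations, so far fewer target-location samples are needed, mirroring the paper's finding that transfer learning helps most when local data are limited.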
Related papers
- SolNet: Open-source deep learning models for photovoltaic power forecasting across the globe [0.0]
  SolNet is a novel, general-purpose, multivariate solar power forecaster.
  We show that SolNet improves forecasting performance in data-scarce settings.
  We provide guidelines and considerations for transfer learning practitioners.
  arXiv Detail & Related papers (2024-05-23T12:00:35Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
  Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
  One key challenge in federated learning is handling non-identically distributed data across the clients.
  We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
  arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
  We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
  We investigate whether it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
  arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- A Comparative Study on Generative Models for High Resolution Solar Observation Imaging [59.372588316558826]
  This work investigates the capabilities of current state-of-the-art generative models to accurately capture the data distribution behind observed solar activity states.
  Using distributed training on supercomputers, we are able to train generative models for up to 1024x1024 resolution that produce high-quality samples indistinguishable to human experts.
  arXiv Detail & Related papers (2023-04-14T14:40:32Z)
- Local-Global Methods for Generalised Solar Irradiance Forecasting [1.4452289368758378]
  We show it is possible to create models capable of accurately forecasting solar irradiance at new locations.
  This could facilitate planning and optimisation for both newly deployed solar farms and domestic installations.
  arXiv Detail & Related papers (2023-03-10T16:13:35Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
  Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
  This creates a barrier to fusing knowledge across individual models to yield a better single model.
  We propose a dataless knowledge fusion method that merges models in their parameter space.
  arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Model Selection, Adaptation, and Combination for Deep Transfer Learning through Neural Networks in Renewable Energies [5.953831950062808]
  We conduct the first thorough experiment on model selection and adaptation for transfer learning in renewable power forecasting.
  We adapt models based on data from different seasons and limit the amount of training data.
  We show how combining multiple models through ensembles can significantly improve the model selection and adaptation approach.
  arXiv Detail & Related papers (2022-04-28T05:34:50Z)
- Data Selection for Efficient Model Update in Federated Learning [0.07614628596146598]
  We propose to reduce the amount of local data that is needed to train a global model.
  We do this by splitting the model into a lower part for generic feature extraction and an upper part that is more sensitive to the characteristics of the local data.
  Our experiments show that less than 1% of the local data can transfer the characteristics of the client data to the global model.
  arXiv Detail & Related papers (2021-11-05T14:07:06Z)
- Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics [118.75207687144817]
  We introduce Data Maps, a model-based tool to characterize and diagnose datasets.
  We leverage a largely ignored source of information: the behavior of the model on individual instances during training.
  Our results indicate that a shift in focus from quantity to quality of data could lead to robust models and improved out-of-distribution generalization.
  arXiv Detail & Related papers (2020-09-22T20:19:41Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
  Federated learning is a method of training models on private data distributed over multiple devices.
  We propose a new federated learning algorithm that jointly learns compact local representations on each device.
  We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
  arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.