Solar Power Prediction Using Satellite Data in Different Parts of Nepal
- URL: http://arxiv.org/abs/2406.11877v1
- Date: Sat, 8 Jun 2024 04:23:21 GMT
- Title: Solar Power Prediction Using Satellite Data in Different Parts of Nepal
- Authors: Raj Krishna Nepal, Bibek Khanal, Vibek Ghimire, Kismat Neupane, Atul Pokharel, Kshitij Niraula, Baburam Tiwari, Nawaraj Bhattarai, Khem N. Poudyal, Nawaraj Karki, Mohan B Dangi, John Biden
- Abstract summary: The study focuses on five distinct regions in Nepal and utilizes a dataset spanning almost ten years, obtained from CERES SYN1deg and MERRA-2.
The results indicate high accuracy in predicting solar irradiance, with R-squared (R2) scores close to unity for both train and test datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Because measured solar irradiance data are unavailable for many potential sites in Nepal, the paper proposes predicting solar irradiance from alternative meteorological parameters. The study focuses on five distinct regions of Nepal and utilizes a dataset spanning almost ten years, obtained from CERES SYN1deg and MERRA-2. Machine learning models such as Random Forest, XGBoost, and K-Nearest Neighbors, as well as deep learning models like LSTM and ANN-MLP, are employed and evaluated. The results indicate high accuracy in predicting solar irradiance, with R-squared (R2) scores close to unity for both train and test datasets. The impact of parameter integration on model performance is analyzed, revealing the significance of various parameters in enhancing predictive accuracy. Each model demonstrates strong performance across all parameters, consistently achieving MAE values below 6, RMSE values under 10, MBE magnitudes within 2, and nearly unity R2 values. When solar-derived parameters such as "Solar_Irradiance_Clear_Sky" and "UVA" are removed from the datasets, the models' performance degrades significantly: MAE increases to as much as 82, RMSE to 135, and MBE magnitudes to 7. Among the models, KNN shows the weakest performance, with an R2 of 0.7582546, while ANN shows the strongest, with an R2 of 0.9245877. Hence, the study concludes that the Artificial Neural Network (ANN) performs exceptionally well, demonstrating its versatility even under sparse parameter conditions.
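For a concrete picture of the workflow the abstract describes, the snippet below is a minimal sketch: it fits a few of the named model families on a meteorological feature table and reports MAE, RMSE, MBE, and R2. The synthetic data, feature count, and model settings are placeholders rather than the CERES SYN1deg / MERRA-2 configuration used in the paper, and scikit-learn's MLPRegressor stands in for the ANN-MLP.

```python
# Minimal sketch of the evaluation pipeline described in the abstract:
# fit several regressors on meteorological features and score them with
# MAE, RMSE, MBE, and R2. Data and feature names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 2000
# Stand-in meteorological predictors (temperature, humidity, cloud fraction, ...).
X = rng.normal(size=(n, 6))
# Stand-in target: "solar irradiance" as a noisy function of the predictors.
y = 600 + 80 * X[:, 0] - 50 * X[:, 2] + rng.normal(scale=10, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

def mean_bias_error(y_true, y_pred):
    """MBE: average signed difference between predictions and observations."""
    return float(np.mean(y_pred - y_true))

models = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "ANN-MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    mbe = mean_bias_error(y_test, pred)
    r2 = r2_score(y_test, pred)
    print(f"{name:>13}: MAE={mae:.2f} RMSE={rmse:.2f} MBE={mbe:.2f} R2={r2:.4f}")
```

Swapping in real data would only require replacing the synthetic X and y with the corresponding meteorological columns and the irradiance target.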
Related papers
- Solar synthetic imaging: Introducing denoising diffusion probabilistic models on SDO/AIA data [0.0]
This study proposes using generative deep learning models, specifically a Denoising Diffusion Probabilistic Model (DDPM), to create synthetic images of solar phenomena.
By employing a dataset from the AIA instrument aboard the SDO spacecraft, we aim to address the data scarcity issue.
The DDPM's performance is evaluated using cluster metrics, Frechet Inception Distance (FID), and F1-score, showcasing promising results in generating realistic solar imagery.
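As a side note on one of the metrics named here, the Frechet Inception Distance compares the Gaussian statistics of feature embeddings of real versus generated images. The sketch below computes it from two feature arrays; the random features and their dimensionality are placeholders, not the SDO/AIA setup used in that paper.

```python
# Minimal sketch of the FID computation: compare mean/covariance statistics of
# feature embeddings for real vs. generated images. Features are placeholders.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f).real  # drop tiny imaginary parts
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(512, 64))            # stand-in "real image" features
fake = rng.normal(loc=0.1, size=(512, 64))   # stand-in "generated image" features
print(f"FID = {frechet_distance(real, fake):.3f}")
```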
arXiv Detail & Related papers (2024-04-03T08:18:45Z)
- Solar Radiation Prediction in the UTEQ based on Machine Learning Models [0.0]
The data was obtained from a pyranometer at the Central Campus of the State Technical University of Quevedo (UTEQ).
Different machine learning algorithms were compared using the evaluation metrics Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Determination ($R^2$); their standard definitions are given below.
The study revealed that Gradient Boosting Regressor exhibited superior performance, closely followed by the Random Forest Regressor.
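For reference, these metrics have the usual definitions, where $y_i$ are the observed values, $\hat{y}_i$ the predictions, $\bar{y}$ the mean of the observations, and $n$ the number of samples:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2,\qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}},\qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert,\qquad R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$$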
arXiv Detail & Related papers (2023-12-29T15:54:45Z)
- A Hybrid Deep Learning-based Approach for Optimal Genotype by Environment Selection [8.084449311613517]
We used a dataset comprising 93,028 training records to forecast yields for 10,337 test records, covering 159 locations over 13 years (2003-2015).
This dataset included details on 5,838 distinct genotypes and daily weather data for a 214-day growing season, enabling comprehensive analysis.
We developed two novel convolutional neural network (CNN) architectures: the CNN-DNN model, combining CNN and fully-connected networks, and the CNN-LSTM-DNN model, with an added LSTM layer for weather variables.
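As a rough, hedged sketch of how these two architectures could be wired up (the Keras framework, layer sizes, sequence length handling, and feature counts are illustrative assumptions, not the authors' exact design), the variants might look like this:

```python
# Sketch of the two described architectures: a CNN branch over the daily-weather
# sequence, optionally followed by an LSTM, concatenated with static genotype/soil
# features and fed to dense layers. All sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(with_lstm: bool, days: int = 214, n_weather: int = 7, n_static: int = 16) -> Model:
    weather_in = layers.Input(shape=(days, n_weather), name="daily_weather")
    static_in = layers.Input(shape=(n_static,), name="genotype_and_soil")

    x = layers.Conv1D(32, kernel_size=7, activation="relu")(weather_in)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)

    if with_lstm:
        x = layers.LSTM(64)(x)                   # CNN-LSTM-DNN variant
    else:
        x = layers.GlobalAveragePooling1D()(x)   # CNN-DNN variant

    x = layers.Concatenate()([x, static_in])
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dense(64, activation="relu")(x)
    yield_out = layers.Dense(1, name="predicted_yield")(x)

    model = Model(inputs=[weather_in, static_in], outputs=yield_out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

cnn_dnn = build_model(with_lstm=False)
cnn_lstm_dnn = build_model(with_lstm=True)
```

The only difference between the two variants in this sketch is whether the convolutional weather branch is summarized by an LSTM or by global pooling before being concatenated with the static features.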
arXiv Detail & Related papers (2023-09-22T17:31:47Z)
- Deep Learning for Day Forecasts from Sparse Observations [60.041805328514876]
Deep neural networks offer an alternative paradigm for modeling weather conditions.
MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point.
MetNet-3 has high temporal and spatial resolution, up to 2 minutes and 1 km respectively, as well as low operational latency.
arXiv Detail & Related papers (2023-06-06T07:07:54Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- Machine learning model to predict solar radiation, based on the integration of meteorological data and data obtained from satellite images [0.0]
Knowing the behavior of solar radiation at a geographic location is essential for the use of energy from the sun.
Images obtained from the GOES-13 satellite were used, from which variables were extracted and integrated into the datasets.
The performance of 5 machine learning algorithms in predicting solar radiation was evaluated.
arXiv Detail & Related papers (2022-04-08T22:17:19Z)
- Deep Learning Based Cloud Cover Parameterization for ICON [55.49957005291674]
We train NN-based cloud cover parameterizations with coarse-grained data from realistic regional and global ICON simulations.
Globally trained NNs can reproduce sub-grid scale cloud cover of the regional simulation.
We identify an overemphasis on specific humidity and cloud ice as the reason why our column-based NN cannot perfectly generalize from the global to the regional coarse-grained data.
arXiv Detail & Related papers (2021-12-21T16:10:45Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Comparing Test Sets with Item Response Theory [53.755064720563]
We evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.
We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models.
We also observe that the span selection task format, which is used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models.
arXiv Detail & Related papers (2021-06-01T22:33:53Z)
- Modelling of daily reference evapotranspiration using deep neural network in different climates [0.0]
This study investigates the performance of artificial neural network (ANN) and deep neural network (DNN) models for estimating daily ETo.
The best performance has been observed with the proposed model of DNN with SeLU activation function (P-DNN-SeLU) in Aksaray.
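A minimal sketch of a SeLU-activated DNN regressor in this spirit (the Keras framework, input features, and layer sizes are assumptions, not the paper's P-DNN-SeLU configuration) might be:

```python
# Small SeLU-activated DNN regressor for daily reference evapotranspiration (ETo).
# Feature count and layer widths are placeholders, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

dnn_selu = Sequential([
    layers.Input(shape=(6,)),  # placeholder number of daily meteorological inputs
    layers.Dense(64, activation="selu", kernel_initializer="lecun_normal"),
    layers.Dense(64, activation="selu", kernel_initializer="lecun_normal"),
    layers.Dense(32, activation="selu", kernel_initializer="lecun_normal"),
    layers.Dense(1),           # predicted daily ETo
])
dnn_selu.compile(optimizer="adam", loss="mse", metrics=["mae"])
```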
arXiv Detail & Related papers (2020-06-02T16:39:47Z)
- Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model based on RNN-Transducer, together with improved beam search, reaches a quality only 3.8% absolute WER worse than the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)