Cross-variable Linear Integrated ENhanced Transformer for Photovoltaic power forecasting
- URL: http://arxiv.org/abs/2406.03808v1
- Date: Thu, 6 Jun 2024 07:30:27 GMT
- Title: Cross-variable Linear Integrated ENhanced Transformer for Photovoltaic power forecasting
- Authors: Jiaxin Gao, Qinglong Cao, Yuntian Chen, Dongxiao Zhang
- Abstract summary: PV-Client employs an ENhanced Transformer module to capture complex interactions of various features in PV systems.
PV-Client streamlines the embedding and position encoding layers by replacing the Decoder module with a projection layer.
Experimental results on three real-world PV power datasets affirm PV-Client's state-of-the-art (SOTA) performance in PV power forecasting.
- Score: 2.1799192736303783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photovoltaic (PV) power forecasting plays a crucial role in optimizing the operation and planning of PV systems, thereby enabling efficient energy management and grid integration. However, uncertainties caused by fluctuating weather conditions and complex interactions between different variables pose significant challenges to accurate PV power forecasting. In this study, we propose PV-Client (Cross-variable Linear Integrated ENhanced Transformer for Photovoltaic power forecasting) to address these challenges and enhance PV power forecasting accuracy. PV-Client employs an ENhanced Transformer module to capture complex interactions of various features in PV systems, and utilizes a linear module to learn trend information in PV power. Diverging from conventional time series-based Transformer models that use cross-time Attention to learn dependencies between different time steps, the Enhanced Transformer module integrates cross-variable Attention to capture dependencies between PV power and weather factors. Furthermore, PV-Client streamlines the embedding and position encoding layers by replacing the Decoder module with a projection layer. Experimental results on three real-world PV power datasets affirm PV-Client's state-of-the-art (SOTA) performance in PV power forecasting. Specifically, PV-Client surpasses the second-best model GRU by 5.3% in MSE metrics and 0.9% in accuracy metrics at the Jingang Station. Similarly, PV-Client outperforms the second-best model SVR by 10.1% in MSE metrics and 0.2% in accuracy metrics at the Xinqingnian Station, and PV-Client exhibits superior performance compared to the second-best model SVR with enhancements of 3.4% in MSE metrics and 0.9% in accuracy metrics at the Hongxing Station.
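To make the cross-variable idea concrete, below is a minimal PyTorch sketch, not the authors' implementation; the layer sizes, module names, and channel ordering are assumptions. Each variable's full history is treated as one attention token, so attention runs across variables rather than across time steps; a linear branch models the trend in the PV power channel, and a projection layer stands in for a decoder:

```python
import torch
import torch.nn as nn

class CrossVariableBlock(nn.Module):
    """Self-attention across the variable axis rather than the time axis."""
    def __init__(self, seq_len: int, n_heads: int = 4):
        super().__init__()
        # Each variable's full history is one token of dimension seq_len.
        self.attn = nn.MultiheadAttention(seq_len, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(seq_len)

    def forward(self, x):                        # x: (batch, time, vars)
        tokens = x.transpose(1, 2)               # (batch, vars, time)
        out, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + out).transpose(1, 2)

class PVClientSketch(nn.Module):
    """Enhanced-Transformer branch plus a linear trend branch;
    a projection layer replaces the usual decoder."""
    def __init__(self, seq_len: int, n_vars: int, horizon: int):
        super().__init__()
        self.block = CrossVariableBlock(seq_len)
        self.trend = nn.Linear(seq_len, horizon)         # linear module on PV power
        self.proj = nn.Linear(seq_len * n_vars, horizon) # projection instead of decoder

    def forward(self, x):                        # x: (batch, time, vars); channel 0 = PV power (assumed)
        z = self.block(x).flatten(1)
        return self.proj(z) + self.trend(x[..., 0])

y_hat = PVClientSketch(seq_len=96, n_vars=6, horizon=24)(torch.randn(8, 96, 6))
```

Transposing before attention is the key design choice: the attention weights then form a variables-by-variables map, which is how dependencies between PV power and weather factors can be captured directly.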
Related papers
- Clustering-based Multitasking Deep Neural Network for Solar Photovoltaics Power Generation Prediction [16.263501526929975]
We propose a clustering-based multitasking deep neural network (CM-DNN) framework for PV power generation prediction.
For each type, a deep neural network (DNN) is employed and trained until the accuracy cannot be improved.
For a specified customer type, inter-model knowledge transfer is conducted to enhance its training accuracy.
The proposed CM-DNN is tested on a real-world PV power generation dataset.
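As a hedged sketch of the cluster-then-train idea in this summary (the cluster count, model sizes, and warm-start transfer step below are assumptions, not the paper's exact procedure):

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def make_mlp(n_features: int) -> nn.Module:
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

# Toy data: per-customer daily generation profiles and a next-day target.
X = np.random.rand(200, 24).astype(np.float32)
y = np.random.rand(200, 1).astype(np.float32)

# 1) Cluster customers by generation pattern (customer "types").
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2) Train one DNN per cluster; 3) warm-start later clusters from an
#    already-trained model as a stand-in for inter-model knowledge transfer.
models = {}
for c in range(3):
    model = make_mlp(24)
    if models:
        model.load_state_dict(next(iter(models.values())).state_dict())
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    xb = torch.from_numpy(X[labels == c])
    yb = torch.from_numpy(y[labels == c])
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xb), yb)
        loss.backward()
        opt.step()
    models[c] = model
```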
arXiv Detail & Related papers (2024-05-09T00:08:21Z)
- VST++: Efficient and Stronger Visual Saliency Transformer [74.26078624363274]
We develop an efficient and stronger VST++ model to explore global long-range dependencies.
We evaluate our model across various transformer-based backbones on RGB, RGB-D, and RGB-T SOD benchmark datasets.
arXiv Detail & Related papers (2023-10-18T05:44:49Z)
- Isomer: Isomerous Transformer for Zero-shot Video Object Segmentation [59.91357714415056]
We propose two Transformer variants: Context-Sharing Transformer (CST) and Semantic Gathering-Scattering Transformer (SGST).
CST learns the global-shared contextual information within image frames with lightweight computation; SGST models the semantic correlation separately for the foreground and background.
Compared with the baseline that uses vanilla Transformers for multi-stage fusion, ours significantly increases the speed by 13 times and achieves new state-of-the-art ZVOS performance.
arXiv Detail & Related papers (2023-08-13T06:12:00Z)
- MATNet: Multi-Level Fusion Transformer-Based Model for Day-Ahead PV Generation Forecasting [0.47518865271427785]
MATNet is a novel self-attention transformer-based architecture for PV power generation forecasting.
It consists of a hybrid approach that combines the AI paradigm with the prior physical knowledge of PV power generation.
Results show that our proposed architecture significantly outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2023-06-17T14:03:09Z)
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity, although their self-attention mechanism is computationally expensive.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- AutoPV: Automated photovoltaic forecasts with limited information using an ensemble of pre-trained models [0.20999222360659608]
We propose a new method for day-ahead PV power generation forecasts called AutoPV.
AutoPV is a weighted ensemble of forecasting models that represent different PV mounting configurations.
For a real-world data set with 11 PV plants, the accuracy of AutoPV is comparable to a model trained on two years of data and outperforms an incrementally trained model.
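A minimal sketch of the weighted-ensemble idea follows; the simplex projection and optimizer are assumptions, not necessarily the paper's weighting scheme:

```python
import numpy as np
from scipy.optimize import minimize

def ensemble_weights(preds: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Fit non-negative weights (summing to 1) over pre-trained model forecasts."""
    n = preds.shape[0]

    def loss(w):
        w = np.abs(w) / np.abs(w).sum()       # project onto the simplex
        return np.mean((w @ preds - target) ** 2)

    res = minimize(loss, np.full(n, 1.0 / n))
    return np.abs(res.x) / np.abs(res.x).sum()

# Toy usage: three pre-trained mounting-configuration models and a little
# measured data from the new plant.
preds = np.random.rand(3, 48)                 # (n_models, n_samples)
target = 0.6 * preds[0] + 0.4 * preds[2]      # synthetic "new plant" output
w = ensemble_weights(preds, target)           # weights concentrate on models 0 and 2
forecast = w @ preds                          # weighted day-ahead ensemble forecast
```

Because only the weights are fitted, very little data from the new plant is needed, which is the point of the "limited information" setting.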
arXiv Detail & Related papers (2022-12-13T18:29:03Z)
- Solar Power Time Series Forecasting Utilising Wavelet Coefficients [0.8602553195689513]
The aim of this study is to improve the efficiency of applying Wavelet Transform (WT) by proposing a new method that uses a single simplified model.
Given a time series and its Wavelet Transform (WT) coefficients, it trains one model with the coefficients as features and the original time series as labels.
The proposed approach is evaluated using 17 months of aggregated solar Photovoltaic (PV) power data from two real-world datasets.
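A rough sketch of "coefficients as features, original series as labels" using PyWavelets; the window length, wavelet, decomposition level, and regressor here are assumptions:

```python
import numpy as np
import pywt
from sklearn.linear_model import Ridge

# Toy PV-like signal; real data would be measured PV power.
series = np.sin(np.linspace(0, 60, 1000)) + 0.1 * np.random.randn(1000)
window, horizon = 64, 1

X, y = [], []
for t in range(window, len(series) - horizon + 1):
    coeffs = pywt.wavedec(series[t - window:t], "db4", level=3)
    X.append(np.concatenate(coeffs))      # WT coefficients as features
    y.append(series[t + horizon - 1])     # original series value as label

model = Ridge().fit(np.array(X), np.array(y))  # one simplified model, no per-band models
```

Training a single model on the concatenated coefficients avoids the usual WT pipeline of fitting one forecaster per decomposition band and recombining their outputs.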
arXiv Detail & Related papers (2022-10-01T13:02:43Z)
- Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks [126.33843752332139]
We introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed as LW-Transformer.
We apply LW-Transformer to a set of Transformer-based networks and evaluate them quantitatively on three vision-and-language tasks and six benchmark datasets.
Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks.
arXiv Detail & Related papers (2022-04-16T11:30:26Z)
- AdaViT: Adaptive Vision Transformers for Efficient Image Recognition [78.07924262215181]
We introduce AdaViT, an adaptive framework that learns to derive usage policies on which patches, self-attention heads and transformer blocks to use.
Our method obtains more than a 2x improvement in efficiency compared to state-of-the-art vision transformers, with only a 0.8% drop in accuracy.
arXiv Detail & Related papers (2021-11-30T18:57:02Z)
- Spatio-temporal graph neural networks for multi-site PV power forecasting [0.0]
We present two novel graph neural network models for deterministic multi-site forecasting.
The proposed models outperform state-of-the-art multi-site forecasting methods for prediction horizons of six hours ahead.
arXiv Detail & Related papers (2021-07-29T10:15:01Z)
- PVT v2: Improved Baselines with Pyramid Vision Transformer [112.0139637538858]
We improve the original Pyramid Vision Transformer (PVT v1).
PVT v2 reduces the computational complexity of PVT v1 to linear.
It achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation.
arXiv Detail & Related papers (2021-06-25T17:51:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.