Cluster-based ensemble learning for wind power modeling with
meteorological wind data
- URL: http://arxiv.org/abs/2204.00646v1
- Date: Fri, 1 Apr 2022 18:20:04 GMT
- Title: Cluster-based ensemble learning for wind power modeling with
meteorological wind data
- Authors: Hao Chen
- Abstract summary: This paper constructs a modeling scheme that integrates three types of ensemble learning algorithms, bagging, boosting, and stacking.
It also investigates applications of different clustering algorithms and methods for determining the number of clusters in wind power modeling.
- Score: 6.385624548310884
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Optimal implementation and monitoring of wind energy generation hinge on
reliable power modeling that is vital for understanding turbine control, farm
operational optimization, and grid load balance. Based on the idea that similar
wind conditions lead to similar wind power, this paper constructs a modeling
scheme that systematically integrates three types of ensemble learning algorithms
(bagging, boosting, and stacking) with clustering approaches to achieve optimal
power modeling. It also investigates applications of different clustering
algorithms and methods for determining the number of clusters in wind power
modeling. The results reveal that all ensemble models with clustering exploit
the intrinsic information of wind data and thus outperform models without it by
approximately 15% on average. The best-performing model, which uses farthest-first
clustering, is computationally fast and performs exceptionally well, with an improvement of
around 30%. The modeling is further improved by about 5% by introducing stacking
that fuses ensembles built with different clusterings. The proposed modeling framework
thus demonstrates promise by delivering efficient and robust modeling
performance.
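
A minimal sketch of the cluster-then-ensemble idea described in the abstract, assuming scikit-learn, a toy meteorological feature matrix, and k-means standing in for the paper's full set of clustering algorithms; the per-cluster learner is an illustrative boosting choice, not the authors' exact configuration.

```python
# Sketch: cluster wind conditions, then fit one ensemble regressor per cluster.
# Data, cluster count, and model choices are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor  # boosting-type ensemble

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))             # toy features: wind speed, direction, density
y = np.clip(X[:, 0], 0, None) ** 3 + 0.1 * rng.normal(size=1000)  # toy power signal

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# One ensemble model per wind-condition cluster.
models = {}
for c in np.unique(labels):
    m = GradientBoostingRegressor(random_state=0)
    m.fit(X[labels == c], y[labels == c])
    models[c] = m

def predict(X_new):
    """Route each sample to the model of its nearest cluster centroid."""
    c_new = kmeans.predict(X_new)
    return np.array([models[c].predict(x.reshape(1, -1))[0] for c, x in zip(c_new, X_new)])

print(predict(X[:5]), y[:5])
```

A stacking step in the spirit of the paper would additionally fit a meta-learner on the predictions of ensembles built with different clusterings; that is omitted here for brevity.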
Related papers
- Combining Physics-based and Data-driven Modeling for Building Energy Systems [5.437298646956505]
Building energy modeling plays a vital role in optimizing the operation of building energy systems.
Researchers are combining physics-based and data-driven models into hybrid approaches.
We evaluate four predominant hybrid approaches in building energy modeling through a real-world case study.
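
One common way to combine the two model classes is residual hybridization: a physics-based baseline gives a first estimate and a data-driven model learns the remaining error. The sketch below illustrates that pattern only; it is an assumption, not one of the four specific hybrid approaches evaluated in the paper, and the `physics_baseline` function and its inputs are hypothetical.

```python
# Sketch of a residual hybrid: physics baseline + data-driven correction.
# The baseline formula and features are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def physics_baseline(outdoor_temp, setpoint):
    """Toy physics-style estimate of heating load (placeholder, not a real model)."""
    return np.maximum(setpoint - outdoor_temp, 0.0) * 1.2  # kW per degree, illustrative

rng = np.random.default_rng(1)
outdoor_temp = rng.uniform(-10, 25, size=500)
setpoint = np.full(500, 21.0)
occupancy = rng.integers(0, 50, size=500)
measured_load = physics_baseline(outdoor_temp, setpoint) + 0.05 * occupancy + rng.normal(0, 0.5, 500)

baseline = physics_baseline(outdoor_temp, setpoint)
residual = measured_load - baseline                        # what the physics model misses

X = np.column_stack([outdoor_temp, occupancy])
correction = RandomForestRegressor(random_state=0).fit(X, residual)

hybrid_prediction = baseline + correction.predict(X)       # physics + learned correction
```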
arXiv Detail & Related papers (2024-11-01T21:56:39Z)
- FusionBench: A Comprehensive Benchmark of Deep Model Fusion [78.80920533793595]
Deep model fusion is a technique that unifies the predictions or parameters of several deep neural networks into a single model.
FusionBench is the first comprehensive benchmark dedicated to deep model fusion.
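
Deep model fusion covers several mechanisms; the simplest is weight-space averaging of networks that share an architecture. The sketch below shows that one case, with PyTorch as an assumed framework; it is not FusionBench's API and does not represent the full range of fusion methods the benchmark covers.

```python
# Sketch: fuse same-architecture networks by averaging their parameters.
# Framework (PyTorch) and the tiny model are assumptions for illustration.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

models = [make_model() for _ in range(3)]   # stand-ins for separately trained networks

fused = copy.deepcopy(models[0])
with torch.no_grad():
    for name, param in fused.named_parameters():
        stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
        param.copy_(stacked.mean(dim=0))    # element-wise average of weights

print(fused(torch.randn(2, 8)))             # the fused model is used like any single model
```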
arXiv Detail & Related papers (2024-06-05T13:54:28Z)
- Equipment Health Assessment: Time Series Analysis for Wind Turbine Performance [1.533848041901807]
We leverage SCADA data from diverse wind turbines to predict power output, employing advanced time series methods.
A key innovation lies in the ensemble of FNN and LSTM models, capitalizing on their collective learning.
Machine learning techniques are applied to detect wind turbine performance deterioration, enabling proactive maintenance strategies.
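
A minimal sketch of the FNN/LSTM ensemble idea, assuming PyTorch and simple prediction averaging; the actual architectures, SCADA features, and combination rule used in the paper may differ.

```python
# Sketch: average the power predictions of a feed-forward net and an LSTM.
# Architectures, window length, and the averaging rule are illustrative assumptions.
import torch
import torch.nn as nn

class FNN(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                    # x: (batch, window, features)
        return self.net(x[:, -1, :])         # FNN uses the latest time step only

class LSTMModel(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # last hidden state -> power estimate

x = torch.randn(4, 24, 6)                    # 4 samples, 24-step window, 6 SCADA features
fnn, lstm = FNN(6), LSTMModel(6)
ensemble_pred = 0.5 * fnn(x) + 0.5 * lstm(x) # simple equal-weight ensemble
```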
arXiv Detail & Related papers (2024-03-01T20:54:31Z)
- RAVEN: Rethinking Adversarial Video Generation with Efficient Tri-plane Networks [93.18404922542702]
We present a novel video generative model designed to address long-term spatial and temporal dependencies.
Our approach incorporates a hybrid explicit-implicit tri-plane representation inspired by 3D-aware generative frameworks.
Our model synthesizes high-fidelity video clips at a resolution of $256\times256$ pixels, with durations extending to more than $5$ seconds at a frame rate of 30 fps.
arXiv Detail & Related papers (2024-01-11T16:48:44Z)
- A Lightweight Feature Fusion Architecture For Resource-Constrained Crowd Counting [3.5066463427087777]
We introduce two lightweight models to enhance the versatility of crowd-counting models.
These models maintain the same downstream architecture while incorporating two distinct backbones: MobileNet and MobileViT.
We leverage Adjacent Feature Fusion to extract diverse scale features from a Pre-Trained Model (PTM) and subsequently combine these features seamlessly.
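
The summary describes extracting features at several scales from a pre-trained backbone and combining them. A generic way to do this is to resize the feature maps to a common resolution and concatenate them, as sketched below with PyTorch; this is a hedged illustration of multi-scale fusion in general, not the paper's specific Adjacent Feature Fusion module.

```python
# Sketch: fuse multi-scale backbone features by upsampling and concatenation.
# The feature maps and channel sizes are made-up placeholders.
import torch
import torch.nn.functional as F

# Pretend these come from three stages of a MobileNet-style backbone.
f1 = torch.randn(1, 32, 64, 64)    # high resolution, few channels
f2 = torch.randn(1, 64, 32, 32)
f3 = torch.randn(1, 128, 16, 16)   # low resolution, many channels

target = f1.shape[-2:]             # fuse at the highest resolution
fused = torch.cat(
    [f1,
     F.interpolate(f2, size=target, mode="bilinear", align_corners=False),
     F.interpolate(f3, size=target, mode="bilinear", align_corners=False)],
    dim=1,
)                                   # shape: (1, 32 + 64 + 128, 64, 64)
print(fused.shape)
```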
arXiv Detail & Related papers (2024-01-11T15:13:31Z)
- Modeling Wind Turbine Performance and Wake Interactions with Machine Learning [0.0]
Different machine learning (ML) models are trained on SCADA and meteorological data collected at an onshore wind farm.
ML methods for data quality control and pre-processing are applied to the data set under investigation.
A hybrid model is found to achieve high accuracy for modeling wind turbine power capture.
arXiv Detail & Related papers (2022-12-02T23:07:05Z)
- End-to-end Wind Turbine Wake Modelling with Deep Graph Representation Learning [7.850747042819504]
This work proposes a surrogate model for the representation of wind turbine wakes based on a graph representation learning method termed a graph neural network.
The proposed end-to-end deep learning model operates directly on unstructured meshes and has been validated against high-fidelity data.
A case study based upon a real world wind farm further demonstrates the capability of the proposed approach to predict farm scale power generation.
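
A graph neural network operating on an unstructured mesh boils down to message passing over mesh edges. The sketch below is a single, generic mean-aggregation step in plain PyTorch, with hypothetical node features and edge list; it is not the surrogate architecture from the paper.

```python
# Sketch: one mean-aggregation message-passing step over mesh edges.
# Node features, edges, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

n_nodes, n_feat = 5, 3
x = torch.randn(n_nodes, n_feat)                       # e.g. flow quantities per mesh node
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])  # (source, target) pairs

msg_fn = nn.Linear(2 * n_feat, n_feat)                 # builds a message from (source, target)
upd_fn = nn.Linear(2 * n_feat, n_feat)                 # updates a node from (own, aggregated)

messages = msg_fn(torch.cat([x[edges[:, 0]], x[edges[:, 1]]], dim=1))
agg = torch.zeros(n_nodes, n_feat).index_add_(0, edges[:, 1], messages)
deg = torch.zeros(n_nodes).index_add_(0, edges[:, 1], torch.ones(len(edges))).clamp(min=1)
x_new = upd_fn(torch.cat([x, agg / deg.unsqueeze(1)], dim=1))   # updated node states
```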
arXiv Detail & Related papers (2022-11-24T15:00:06Z)
- Composing Ensembles of Pre-trained Models via Iterative Consensus [95.10641301155232]
We propose a unified framework for composing ensembles of different pre-trained models.
We use pre-trained models as "generators" or "scorers" and compose them via closed-loop iterative consensus optimization.
We demonstrate that consensus achieved by an ensemble of scorers outperforms the feedback of a single scorer.
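
The generator/scorer composition can be pictured as a propose-score-refine loop: a generator proposes candidates, an ensemble of scorers ranks them, and the best candidate seeds the next round. The sketch below uses toy, hypothetical functions to show that control flow only, not the paper's models or optimization procedure.

```python
# Sketch: closed-loop consensus between a "generator" and an ensemble of "scorers".
# The toy generator/scorers and the string task are hypothetical placeholders.
import random

def generator(seed, n=8):
    """Propose candidates by randomly extending the current best guess."""
    return [seed + random.choice("abcde") for _ in range(n)]

scorers = [
    lambda s: s.count("a"),          # scorer 1 prefers many 'a's
    lambda s: -abs(len(s) - 6),      # scorer 2 prefers length around 6
]

def consensus_score(candidate):
    return sum(score(candidate) for score in scorers)   # ensemble agreement

best = ""
for _ in range(10):                                      # iterative refinement loop
    candidates = generator(best)
    best = max(candidates, key=consensus_score)

print(best, consensus_score(best))
```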
arXiv Detail & Related papers (2022-10-20T18:46:31Z)
- Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z)
- Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E$^3$), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
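
The core mechanism behind a sparse MoE is a gating network that routes each input to only its top-k experts. The sketch below shows that routing step in PyTorch with made-up sizes; it is not the E$^3$ architecture or its expert-partitioning scheme.

```python
# Sketch: top-k gating over a set of experts (the "sparse" in sparse MoE).
# Expert count, k, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

n_experts, k, d = 4, 2, 16
experts = nn.ModuleList([nn.Linear(d, d) for _ in range(n_experts)])
gate = nn.Linear(d, n_experts)

x = torch.randn(8, d)                                   # a batch of token representations
weights = torch.softmax(gate(x), dim=-1)                # routing probabilities
topk_w, topk_idx = weights.topk(k, dim=-1)              # keep only the k best experts per token

out = torch.zeros_like(x)
for slot in range(k):
    for e in range(n_experts):
        mask = topk_idx[:, slot] == e                   # tokens routed to expert e in this slot
        if mask.any():
            out[mask] += topk_w[mask, slot, None] * experts[e](x[mask])
```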
arXiv Detail & Related papers (2021-10-07T11:58:35Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.