A Deep-Learning Intelligent System Incorporating Data Augmentation for
Short-Term Voltage Stability Assessment of Power Systems
- URL: http://arxiv.org/abs/2112.03265v1
- Date: Sun, 5 Dec 2021 11:40:54 GMT
- Title: A Deep-Learning Intelligent System Incorporating Data Augmentation for
Short-Term Voltage Stability Assessment of Power Systems
- Authors: Yang Li, Meng Zhang, Chen Chen
- Abstract summary: This paper proposes a novel deep-learning intelligent system incorporating data augmentation for STVSA of power systems.
Due to the unavailability of reliable quantitative criteria to judge the stability status for a specific power system, semi-supervised cluster learning is leveraged to obtain labeled samples.
Conditional least squares generative adversarial network (LSGAN)-based data augmentation is introduced to expand the original dataset.
- Score: 9.299576471941753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because data collection and annotation are expensive and tedious,
making a deep learning-based short-term voltage stability assessment (STVSA)
model perform well on a small training dataset is a challenging and urgent
problem. Although a sufficiently large dataset can be generated directly by
contingency simulation, that data generation process is usually cumbersome and
inefficient, whereas data augmentation offers a low-cost and efficient way to
artificially inflate a representative and diversified training dataset with
label-preserving transformations. In this respect, this paper proposes a novel
deep-learning intelligent system incorporating data augmentation for STVSA of
power systems. First, because reliable quantitative criteria for judging the
stability status of a specific power system are unavailable, semi-supervised
cluster learning is leveraged to obtain labeled samples for an originally
small dataset. Second, to make deep learning applicable to the small dataset,
conditional least squares generative adversarial network (LSGAN)-based data
augmentation is introduced to expand the original dataset by artificially
creating additional valid samples. Third, to extract temporal dependencies
from the post-disturbance dynamic trajectories of a system, an assessment
model based on a bi-directional gated recurrent unit with an attention
mechanism is established; it learns the significant time dependencies in both
directions and automatically allocates attention weights. The test results
demonstrate that the presented approach achieves better accuracy and a faster
response time on the original small datasets. Beyond classification accuracy,
this work employs statistical measures to comprehensively examine the
performance of the proposal.
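To make the augmentation step concrete, the following is a minimal PyTorch sketch of conditional LSGAN training on labeled post-disturbance trajectories. It is an illustration only, not the authors' implementation: the trajectory dimensions, network sizes, and optimizer settings are assumed, and only the label-conditioned least-squares adversarial losses reflect the technique named in the abstract.

```python
# Hypothetical sketch of conditional LSGAN-based augmentation of labelled
# post-disturbance voltage trajectories. Shapes and network sizes are assumed.
import torch
import torch.nn as nn

SEQ_LEN, N_BUSES, NOISE_DIM, N_CLASSES = 20, 10, 32, 2  # assumed dimensions


class Generator(nn.Module):
    """Maps (noise, stability label) to a synthetic voltage trajectory."""

    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, SEQ_LEN * N_BUSES), nn.Tanh(),
        )

    def forward(self, z, y):
        x = torch.cat([z, self.label_emb(y)], dim=1)
        return self.net(x).view(-1, SEQ_LEN, N_BUSES)


class Discriminator(nn.Module):
    """Scores trajectory/label pairs; LSGAN uses unbounded real-valued scores."""

    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN * N_BUSES + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, traj, y):
        x = torch.cat([traj.flatten(1), self.label_emb(y)], dim=1)
        return self.net(x)


G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
mse = nn.MSELoss()  # least-squares GAN objective


def train_step(real_traj, labels):
    batch = real_traj.size(0)
    fake = G(torch.randn(batch, NOISE_DIM), labels)

    # Discriminator: push real scores toward 1 and fake scores toward 0.
    d_loss = mse(D(real_traj, labels), torch.ones(batch, 1)) + \
             mse(D(fake.detach(), labels), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: push scores of its fakes toward 1.
    g_loss = mse(D(fake, labels), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()


# Example: one update on a toy batch of 16 labelled trajectories.
d_l, g_l = train_step(torch.rand(16, SEQ_LEN, N_BUSES) * 2 - 1,
                      torch.randint(0, N_CLASSES, (16,)))
```

After training, sampling the generator with chosen stability labels yields the additional synthetic samples used to expand the original small dataset.

The assessment model of the third step, a bi-directional GRU whose outputs are pooled by an attention mechanism over time steps, can likewise be sketched as below; the hidden size, feature count, and the simple linear attention scoring are illustrative assumptions rather than the paper's exact architecture.

```python
# Hypothetical sketch of a bi-directional GRU with attention for STVSA.
import torch
import torch.nn as nn


class BiGRUAttention(nn.Module):
    def __init__(self, n_features=10, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True,
                          bidirectional=True)
        # One score per time step, learned from the concatenated
        # forward/backward hidden states.
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, features)
        h, _ = self.gru(x)                          # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time
        context = (weights * h).sum(dim=1)          # (batch, 2*hidden)
        return self.classifier(context)             # stability logits


# Usage on a toy batch of post-disturbance trajectories.
model = BiGRUAttention()
logits = model(torch.randn(8, 20, 10))   # 8 samples, 20 steps, 10 bus voltages
print(logits.shape)                      # torch.Size([8, 2])
```

The softmax over time steps is what automatically allocates the attention weights, so the classifier focuses on the parts of the post-disturbance trajectory that are most indicative of short-term voltage instability.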
Related papers
- Meta-Statistical Learning: Supervised Learning of Statistical Inference [59.463430294611626]
This work demonstrates that the tools and principles driving the success of large language models (LLMs) can be repurposed to tackle distribution-level tasks.
We propose meta-statistical learning, a framework inspired by multi-instance learning that reformulates statistical inference tasks as supervised learning problems.
arXiv Detail & Related papers (2025-02-17T18:04:39Z)
- Generative Modeling and Data Augmentation for Power System Production Simulation [0.0]
This paper proposes a generative model-assisted approach for load forecasting under small sample scenarios.
The expanded dataset significantly reduces forecasting errors compared to the original dataset.
The diffusion model outperforms the generative adversarial model, achieving errors roughly 200 times smaller.
arXiv Detail & Related papers (2024-12-10T12:38:47Z)
- Learnable Sparse Customization in Heterogeneous Edge Computing [27.201987866208484]
We propose Learnable Personalized Sparsification for heterogeneous Federated learning (FedLPS).
FedLPS learns the importance of model units on local data representation and derives an importance-based sparse pattern to accurately extract personalized data features.
Experiments show that FedLPS outperforms status quo approaches in accuracy and training costs.
arXiv Detail & Related papers (2024-12-10T06:14:31Z)
- Large-Scale Dataset Pruning in Adversarial Training through Data Importance Extrapolation [1.3124513975412255]
We propose a new data pruning strategy based on extrapolating data importance scores from a small set of data to a larger set.
In an empirical evaluation, we demonstrate that extrapolation-based pruning can efficiently reduce dataset size while maintaining robustness.
arXiv Detail & Related papers (2024-06-19T07:23:51Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- TRIAGE: Characterizing and auditing training data for improved regression [80.11415390605215]
We introduce TRIAGE, a novel data characterization framework tailored to regression tasks and compatible with a broad class of regressors.
TRIAGE utilizes conformal predictive distributions to provide a model-agnostic scoring method, the TRIAGE score.
We show that TRIAGE's characterization is consistent and highlight its utility to improve performance via data sculpting/filtering, in multiple regression settings.
arXiv Detail & Related papers (2023-10-29T10:31:59Z)
- STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models [56.27786433792638]
STAR is a data generation method that leverages Large Language Models (LLMs) to synthesize data instances.
We design fine-grained step-by-step instructions to obtain the initial data instances.
Our experiments show that the data generated by STAR significantly improve the performance of low-resource event extraction and relation extraction tasks.
arXiv Detail & Related papers (2023-05-24T12:15:19Z)
- A Data-Centric Approach for Training Deep Neural Networks with Less Data [1.9014535120129343]
This paper summarizes our winning submission to the "Data-Centric AI" competition.
We discuss some of the challenges that arise while training with a small dataset.
We propose a GAN-based solution for synthesizing new data points.
arXiv Detail & Related papers (2021-10-07T16:41:52Z)
- The Imaginative Generative Adversarial Network: Automatic Data Augmentation for Dynamic Skeleton-Based Hand Gesture and Human Action Recognition [27.795763107984286]
We present a novel automatic data augmentation model, which approximates the distribution of the input data and samples new data from this distribution.
Our results show that the augmentation strategy is fast to train and can improve classification accuracy for both neural networks and state-of-the-art methods.
arXiv Detail & Related papers (2021-05-27T11:07:09Z)
- Adaptive Weighting Scheme for Automatic Time-Series Data Augmentation [79.47771259100674]
We present two sample-adaptive automatic weighting schemes for data augmentation.
We validate our proposed methods on a large, noisy financial dataset and on time-series datasets from the UCR archive.
On the financial dataset, we show that the methods in combination with a trading strategy lead to improvements in annualized returns of over 50%, and on the time-series data we outperform state-of-the-art models on over half of the datasets and achieve similar accuracy on the others.
arXiv Detail & Related papers (2021-02-16T17:50:51Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)