A Deep-Learning Intelligent System Incorporating Data Augmentation for
Short-Term Voltage Stability Assessment of Power Systems
- URL: http://arxiv.org/abs/2112.03265v1
- Date: Sun, 5 Dec 2021 11:40:54 GMT
- Title: A Deep-Learning Intelligent System Incorporating Data Augmentation for
Short-Term Voltage Stability Assessment of Power Systems
- Authors: Yang Li, Meng Zhang, Chen Chen
- Abstract summary: This paper proposes a novel deep-learning intelligent system incorporating data augmentation for STVSA of power systems.
Due to the unavailability of reliable quantitative criteria to judge the stability status for a specific power system, semi-supervised cluster learning is leveraged to obtain labeled samples.
Conditional least squares generative adversarial network (LSGAN)-based data augmentation is then introduced to expand the original dataset.
- Score: 9.299576471941753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because data collection and annotation are expensive and tedious,
making a deep learning-based short-term voltage stability assessment (STVSA)
model work well on a small training dataset is a challenging and urgent
problem. Although a sufficiently large dataset can be generated directly by
contingency simulation, this data generation process is usually cumbersome and
inefficient, whereas data augmentation provides a low-cost and efficient way to
artificially enlarge a representative and diversified training dataset through
label-preserving transformations. In this respect, this paper proposes a novel
deep-learning intelligent system that incorporates data augmentation for STVSA
of power systems. First, because no reliable quantitative criterion is
available to judge the stability status of a specific power system,
semi-supervised cluster learning is leveraged to obtain labeled samples for an
originally small dataset. Second, to make deep learning applicable to this
small dataset, conditional least squares generative adversarial network
(LSGAN)-based data augmentation is introduced to expand the original dataset by
artificially creating additional valid samples. Third, to extract temporal
dependencies from the post-disturbance dynamic trajectories of a system, an
assessment model based on a bidirectional gated recurrent unit with an
attention mechanism is established; it learns the significant time dependencies
in both directions and automatically allocates attention weights. The test
results demonstrate that the proposed approach achieves higher accuracy and a
faster response time on the original small dataset. Beyond classification
accuracy, this work also employs statistical measures to comprehensively
examine the performance of the proposed method.
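To make the second step concrete, the sketch below shows a conditional LSGAN-style generator/discriminator pair trained with least-squares adversarial losses and conditioned on the stability label. It is a minimal PyTorch sketch under assumed settings: the feature dimension, noise dimension, layer sizes, optimizer choices, and data normalization are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of conditional LSGAN-based data augmentation (illustrative only;
# dimensions and hyperparameters below are assumptions, not the paper's values).
import torch
import torch.nn as nn

FEAT_DIM = 128   # flattened post-disturbance trajectory features (assumed)
NOISE_DIM = 32
NUM_CLASSES = 2  # 0 = stable, 1 = unstable

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, FEAT_DIM), nn.Tanh(),  # assumes inputs scaled to [-1, 1]
        )
    def forward(self, z, y_onehot):
        # condition the generator on the class label
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + NUM_CLASSES, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),  # no sigmoid: LSGAN uses a least-squares loss
        )
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
mse = nn.MSELoss()

def train_step(real_x, labels):
    y = nn.functional.one_hot(labels, NUM_CLASSES).float()
    z = torch.randn(real_x.size(0), NOISE_DIM)
    fake_x = G(z, y)

    # LSGAN discriminator loss: push D(real) toward 1 and D(fake) toward 0
    d_loss = 0.5 * (mse(D(real_x, y), torch.ones(real_x.size(0), 1))
                    + mse(D(fake_x.detach(), y), torch.zeros(real_x.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # LSGAN generator loss: push D(fake) toward 1
    g_loss = 0.5 * mse(D(G(z, y), y), torch.ones(real_x.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# toy usage with random data (illustration only)
x = torch.randn(16, FEAT_DIM)
y = torch.randint(0, NUM_CLASSES, (16,))
train_step(x, y)
```

After training, class-conditioned samples drawn from the generator would be appended to the small labeled dataset before the assessment model is trained.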
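The third step, a bidirectional GRU with an attention layer over time steps, can be sketched as follows. The hidden size, the simple linear attention scoring, and the example input shape are assumptions made for illustration rather than details reported in the abstract.

```python
# Minimal sketch of a bidirectional GRU classifier with attention over time steps
# (layer sizes and the attention formulation are assumptions).
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, input_dim, hidden_dim=64, num_classes=2):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)        # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, features) post-disturbance trajectories
        h, _ = self.gru(x)                              # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)    # attention over time
        context = (weights * h).sum(dim=1)              # weighted temporal summary
        return self.classifier(context)

# Example: 20 snapshots of 10 bus-voltage features per sample (assumed shape)
model = BiGRUAttention(input_dim=10)
logits = model(torch.randn(4, 20, 10))                  # (4, 2) stable/unstable scores
```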
Related papers
- Large-Scale Dataset Pruning in Adversarial Training through Data Importance Extrapolation [1.3124513975412255]
We propose a new data pruning strategy based on extrapolating data importance scores from a small set of data to a larger set.
In an empirical evaluation, we demonstrate that extrapolation-based pruning can efficiently reduce dataset size while maintaining robustness.
arXiv Detail & Related papers (2024-06-19T07:23:51Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- TRIAGE: Characterizing and auditing training data for improved regression [80.11415390605215]
We introduce TRIAGE, a novel data characterization framework tailored to regression tasks and compatible with a broad class of regressors.
TRIAGE utilizes conformal predictive distributions to provide a model-agnostic scoring method, the TRIAGE score.
We show that TRIAGE's characterization is consistent and highlight its utility for improving performance via data sculpting/filtering in multiple regression settings.
arXiv Detail & Related papers (2023-10-29T10:31:59Z)
- PMU measurements based short-term voltage stability assessment of power systems via deep transfer learning [2.1303885995425635]
This paper proposes a novel phasor measurement unit (PMU) measurements-based STVSA method by using deep transfer learning.
It employs temporal ensembling for sample labeling and utilizes least squares generative adversarial networks (LSGAN) for data augmentation, enabling effective deep learning on small-scale datasets.
Experimental results on the IEEE 39-bus test system demonstrate that the proposed method improves model evaluation accuracy by approximately 20% through transfer learning (see the fine-tuning sketch after this list).
arXiv Detail & Related papers (2023-08-07T23:44:35Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss-function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z)
- STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models [56.27786433792638]
STAR is a data generation method that leverages Large Language Models (LLMs) to synthesize data instances.
We design fine-grained step-by-step instructions to obtain the initial data instances.
Our experiments show that the data generated by STAR significantly improve the performance of low-resource event extraction and relation extraction tasks.
arXiv Detail & Related papers (2023-05-24T12:15:19Z)
- A Data-Centric Approach for Training Deep Neural Networks with Less Data [1.9014535120129343]
This paper summarizes our winning submission to the "Data-Centric AI" competition.
We discuss some of the challenges that arise while training with a small dataset.
We propose a GAN-based solution for synthesizing new data points.
arXiv Detail & Related papers (2021-10-07T16:41:52Z)
- The Imaginative Generative Adversarial Network: Automatic Data Augmentation for Dynamic Skeleton-Based Hand Gesture and Human Action Recognition [27.795763107984286]
We present a novel automatic data augmentation model, which approximates the distribution of the input data and samples new data from this distribution.
Our results show that the augmentation strategy is fast to train and can improve classification accuracy for both neural networks and state-of-the-art methods.
arXiv Detail & Related papers (2021-05-27T11:07:09Z)
- Adaptive Weighting Scheme for Automatic Time-Series Data Augmentation [79.47771259100674]
We present two sample-adaptive automatic weighting schemes for data augmentation.
We validate our proposed methods on a large, noisy financial dataset and on time-series datasets from the UCR archive.
On the financial dataset, we show that the methods, combined with a trading strategy, improve annualized returns by over 50%; on the time-series data, we outperform state-of-the-art models on over half of the datasets and achieve similar accuracy on the others.
arXiv Detail & Related papers (2021-02-16T17:50:51Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
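For the PMU-measurements-based STVSA entry above, which applies deep transfer learning to small datasets, the minimal fine-tuning sketch below reuses the BiGRUAttention model from the earlier sketch. Freezing the recurrent layers while retraining the attention and classification layers is an assumed stand-in for that paper's (unspecified here) transfer strategy.

```python
# Illustrative transfer-learning sketch (assumptions: which layers are frozen,
# the optimizer, and the training schedule are not taken from the cited paper).
import torch
import torch.nn as nn

def fine_tune(model, target_loader, epochs=10, lr=1e-3):
    # freeze the recurrent feature extractor pre-trained on the source system
    for p in model.gru.parameters():
        p.requires_grad = False

    # optimize only the attention and classifier parameters on the target data
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in target_loader:  # small labeled target-system dataset
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```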