Versatile and Robust Transient Stability Assessment via Instance
Transfer Learning
- URL: http://arxiv.org/abs/2102.10296v1
- Date: Sat, 20 Feb 2021 09:10:29 GMT
- Title: Versatile and Robust Transient Stability Assessment via Instance
Transfer Learning
- Authors: Seyedali Meghdadi, Guido Tack, Ariel Liebman, Nicolas Langrené,
Christoph Bergmeir
- Abstract summary: This paper introduces a new data collection method in a data-driven algorithm incorporating the knowledge of power system dynamics.
We introduce a new concept called Fault-Affected Area, which provides crucial information regarding the unstable region of operation.
The test results on the IEEE 39-bus system verify that this model can accurately predict the stability of previously unseen operational scenarios.
- Score: 6.760999627905228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To support N-1 pre-fault transient stability assessment, this paper
introduces a new data collection method in a data-driven algorithm
incorporating the knowledge of power system dynamics. The domain knowledge on
how the disturbance effect will propagate from the fault location to the rest
of the network is leveraged to recognise the dominant conditions that determine
the stability of a system. Accordingly, we introduce a new concept called
Fault-Affected Area, which provides crucial information regarding the unstable
region of operation. This information is embedded in an augmented dataset to
train an ensemble model using an instance transfer learning framework. The test
results on the IEEE 39-bus system verify that this model can accurately predict
the stability of previously unseen operational scenarios while reducing the
risk of false prediction of unstable instances compared to standard approaches.
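The abstract names the framework only as instance transfer learning; a TrAdaBoost-style reweighting is one common instantiation and is sketched below, with the augmented Fault-Affected Area data playing the role of the source domain. All names, the weak learner, and the hyperparameters are illustrative assumptions, not the paper's published method.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def instance_transfer_ensemble(X_src, y_src, X_tgt, y_tgt, n_rounds=10):
    """TrAdaBoost-style instance transfer (sketch): source instances that
    disagree with the target task are progressively down-weighted, while
    misclassified target instances are up-weighted."""
    n_src, n_tgt = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(n_src + n_tgt) / (n_src + n_tgt)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=3)
        clf.fit(X, y, sample_weight=w / w.sum())
        wrong = clf.predict(X) != y
        # Weighted error is measured on the target portion only.
        eps = np.sum(w[n_src:][wrong[n_src:]]) / np.sum(w[n_src:])
        eps = np.clip(eps, 1e-10, 0.499)
        beta_t = eps / (1.0 - eps)
        w[:n_src][wrong[:n_src]] *= beta_src   # fade unhelpful source points
        w[n_src:][wrong[n_src:]] /= beta_t     # focus on hard target points
        learners.append(clf)
        betas.append(beta_t)
    return learners, betas
```

In the original TrAdaBoost, the final prediction is a weighted vote over the learners from the later boosting rounds; any calibrated combination of the returned learners and betas serves the same purpose in a sketch like this.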
Related papers
- Stability Evaluation via Distributional Perturbation Analysis [28.379994938809133]
We propose a stability evaluation criterion based on distributional perturbations.
Our stability evaluation criterion can address both data corruptions and sub-population shifts.
Empirically, we validate the practical utility of our stability evaluation criterion across a host of real-world applications.
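A toy probe of the idea, not the paper's optimal-transport criterion: scan perturbation sizes and report the smallest one that drives accuracy below an acceptable level. Gaussian input noise is a crude stand-in for a distributional perturbation, and all names are hypothetical.

```python
import numpy as np

def perturbation_stability(model, X, y, threshold,
                           scales=np.linspace(0.0, 1.0, 11), seed=0):
    """Smallest input-noise scale at which accuracy falls below `threshold`;
    returns inf if the model survives every tested scale."""
    rng = np.random.default_rng(seed)
    for scale in scales:
        X_pert = X + scale * rng.standard_normal(X.shape)
        accuracy = np.mean(model.predict(X_pert) == y)
        if accuracy < threshold:
            return scale
    return np.inf
```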
arXiv Detail & Related papers (2024-05-06T06:47:14Z)
- Stable Update of Regression Trees [0.0]
We focus on the stability of an inherently explainable machine learning method, namely regression trees.
We propose a regularization method, where data points are weighted based on the uncertainty in the initial model.
Results show that the proposed update method improves stability while achieving similar or better predictive performance.
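A minimal sketch of the weighting idea, assuming scikit-learn-style trees and a per-sample uncertainty score obtained from the initial model; the paper's actual regularization may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def stable_tree_update(X_old, y_old, X_new, y_new, uncertainty):
    """Refit on pooled data, weighting each original point by how certain
    the initial model was about it: confident regions are anchored, while
    uncertain regions remain free to adapt to the new data."""
    w_old = 1.0 / (1.0 + np.asarray(uncertainty))  # low uncertainty -> high weight
    w_new = np.ones(len(X_new))
    X = np.vstack([X_old, X_new])
    y = np.concatenate([y_old, y_new])
    w = np.concatenate([w_old, w_new])
    return DecisionTreeRegressor(min_samples_leaf=5).fit(X, y, sample_weight=w)
```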
arXiv Detail & Related papers (2024-02-21T09:41:56Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Measuring and Mitigating Local Instability in Deep Neural Networks [23.342675028217762]
We study how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process.
For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries.
We propose new data-centric methods that exploit our local stability estimates.
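Measuring this kind of instability needs nothing more than two retrains and a disagreement rate; a minimal sketch, with the training interface assumed:

```python
import numpy as np

def prediction_churn(train_fn, X_eval, seeds=(0, 1)):
    """Fraction of evaluation inputs on which two retrains of the same
    pipeline on the same data, differing only in random seed, disagree.
    `train_fn(seed)` returns a fitted model with a predict() method."""
    preds = [train_fn(seed).predict(X_eval) for seed in seeds]
    return float(np.mean(preds[0] != preds[1]))
```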
arXiv Detail & Related papers (2023-05-18T00:34:15Z)
- Minimax Optimal Estimation of Stability Under Distribution Shift [8.893526921869137]
We analyze the stability of a system under distribution shift.
The stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation.
Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost.
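One plausible formalization consistent with this summary (notation assumed, not necessarily the paper's): stability is the size of the smallest distribution shift that degrades performance past the tolerated level.

```latex
\[
  \mathcal{S}(\varepsilon)
  \;=\;
  \inf_{Q}\Bigl\{\, D_{\mathrm{KL}}(Q \,\|\, P)
  \;:\; \mathbb{E}_{Q}[\ell(\theta; Z)] \ge \varepsilon \Bigr\}
\]
```

Here P is the nominal distribution, ℓ(θ; Z) the loss, and ε the acceptable degradation level; evaluating this quantity for large ε is where the statistical cost mentioned above is incurred.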
arXiv Detail & Related papers (2022-12-13T02:40:30Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
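A generic way to realize the two terms, not the UAL architecture itself: a variance head for data (aleatoric) uncertainty and MC-dropout spread for model (epistemic) uncertainty. All module and function names are illustrative.

```python
import torch
import torch.nn as nn

class TwoUncertaintyHead(nn.Module):
    """Backbone with a mean head (the prediction) and a log-variance head
    (the data uncertainty)."""
    def __init__(self, dim_in, dim_feat=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim_in, dim_feat), nn.ReLU(),
                                  nn.Dropout(p=0.2))
        self.mu = nn.Linear(dim_feat, 1)
        self.log_var = nn.Linear(dim_feat, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

def model_uncertainty(model, x, n_samples=20):
    """Model uncertainty as the spread of predictions across stochastic
    forward passes (dropout left active at inference time)."""
    model.train()
    with torch.no_grad():
        mus = torch.stack([model(x)[0] for _ in range(n_samples)])
    return mus.var(dim=0)
```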
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Robust Stability of Neural-Network Controlled Nonlinear Systems with Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
For computing such a robust stabilizing NN controller, a stability guaranteed training (SGT) is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent [55.85456985750134]
We introduce a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates.
This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting.
To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD with even non-differentiable loss functions.
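A common form of the replace-one-example, on-average construction (notation assumed, not necessarily the paper's exact definition):

```latex
% S is a sample of n points, S^{(i)} the sample with point i replaced by
% an independent copy, and A(S) the model that SGD returns on S.
\[
  \frac{1}{n} \sum_{i=1}^{n}
    \mathbb{E}\bigl[ \| A(S) - A(S^{(i)}) \|_2 \bigr] \;\le\; \epsilon
\]
```

Averaging over the replaced index, rather than taking the worst case, is what allows the bound to be controlled by the risks of the SGD iterates.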
arXiv Detail & Related papers (2020-06-15T06:30:19Z)
- Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
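A crude stand-in for the balancing step that ignores the fractional factorial machinery: equalize the levels of one binarized predictor by subsampling, which weakens its correlation with the remaining predictors. All names are hypothetical.

```python
import numpy as np

def balance_subsample(X, y, col, seed=0):
    """Draw equally many rows for each level of predictor `col`
    (binarized at its median) and return the balanced subsample."""
    rng = np.random.default_rng(seed)
    level = (X[:, col] > np.median(X[:, col])).astype(int)
    idx0, idx1 = np.where(level == 0)[0], np.where(level == 1)[0]
    k = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, k, replace=False),
                           rng.choice(idx1, k, replace=False)])
    return X[keep], y[keep]
```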
arXiv Detail & Related papers (2020-06-08T07:01:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.