Enriching Neural Network Training Dataset to Improve Worst-Case
Performance Guarantees
- URL: http://arxiv.org/abs/2303.13228v1
- Date: Thu, 23 Mar 2023 12:59:37 GMT
- Title: Enriching Neural Network Training Dataset to Improve Worst-Case
Performance Guarantees
- Authors: Rahul Nellikkath, Spyros Chatzivasileiadis
- Abstract summary: We show that adapting the NN training dataset during training can improve the NN performance and substantially reduce its worst-case violations.
This paper proposes an algorithm that identifies and enriches the training dataset with critical datapoints that reduce the worst-case violations and deliver a neural network with improved worst-case performance guarantees.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning algorithms, especially Neural Networks (NNs), are a valuable
tool used to approximate non-linear relationships, like the AC-Optimal Power
Flow (AC-OPF), with considerable accuracy, while achieving a speedup of several
orders of magnitude when deployed. Often in the power systems literature,
the NNs are trained with a fixed dataset generated prior to the training
process. In this paper, we show that adapting the NN training dataset during
training can improve the NN performance and substantially reduce its worst-case
violations. This paper proposes an algorithm that identifies and enriches the
training dataset with critical datapoints that reduce the worst-case violations
and deliver a neural network with improved worst-case performance guarantees.
We demonstrate the performance of our algorithm in four test power systems,
ranging from 39 to 162 buses.
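The core idea lends itself to a simple iterative loop: train the network, search the input domain for the point where the trained network violates the constraints the most, label that point with the exact solver, add it to the training set, and repeat. Below is a minimal PyTorch sketch of such a loop; `solve_acopf` and `violation` are hypothetical placeholders for a ground-truth AC-OPF solver and a constraint-violation measure, and plain gradient ascent stands in for the paper's worst-case search rather than reproducing it.

```python
import torch

def enrich_and_train(model, X, Y, solve_acopf, violation,
                     rounds=10, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # 1) Train on the current (possibly enriched) dataset.
        for _ in range(epochs):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(X), Y)
            loss.backward()
            opt.step()

        # 2) Search for a critical datapoint: start from the worst training
        #    sample and locally maximize the constraint violation.
        x = X[violation(model, X).argmax()].clone().requires_grad_(True)
        x_opt = torch.optim.Adam([x], lr=1e-2)
        for _ in range(100):
            x_opt.zero_grad()
            (-violation(model, x.unsqueeze(0))).sum().backward()
            x_opt.step()
            # (projection of x back onto the admissible load range is omitted)

        # 3) Label the critical point with the exact solver and enrich the set.
        x_new = x.detach().unsqueeze(0)
        X = torch.cat([X, x_new])
        Y = torch.cat([Y, solve_acopf(x_new)])
    return model, X, Y
```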
Related papers
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - Efficient Uncertainty Quantification and Reduction for
Over-Parameterized Neural Networks [23.7125322065694]
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models.
We create statistically guaranteed schemes to principally characterize, and remove, the uncertainty of over-parameterized neural networks.
In particular, our approach, based on what we call a procedural-noise-correcting (PNC) predictor, removes the procedural uncertainty by using only one auxiliary network that is trained on a suitably labeled dataset.
arXiv Detail & Related papers (2023-06-09T05:15:53Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
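For reference, a minimal snnTorch usage sketch is shown below; it only illustrates the package's basic leaky integrate-and-fire API on arbitrary layer sizes, and the IPU-specific optimizations discussed in the paper are not shown.

```python
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(10, 5)          # current injection into the spiking layer
lif = snn.Leaky(beta=0.9)      # LIF neuron with membrane decay rate beta
mem = lif.init_leaky()         # initialize the membrane potential

x = torch.rand(25, 1, 10)      # [time_steps, batch, features]
spikes = []
for t in range(x.size(0)):
    cur = fc(x[t])
    spk, mem = lif(cur, mem)   # returns output spikes and updated membrane
    spikes.append(spk)
out = torch.stack(spikes)      # [time_steps, batch, 5]
```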
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
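For context, the bilinear coupling in question comes from the standard XNOR-Net-style binarization, in which each real-valued filter is approximated by a per-filter scale factor times its sign. The sketch below only illustrates that coupling and is not an implementation of RBONN itself.

```python
import torch

def binarize(w):
    # w: [out_channels, in_channels, k, k] real-valued convolution weights
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)  # per-filter scale factor
    b = torch.sign(w)                                   # binary weights in {-1, +1}
    return alpha * b                                    # bilinear in (alpha, b)

w = torch.randn(16, 8, 3, 3)
w_hat = binarize(w)
recon_err = (w - w_hat).pow(2).mean()  # approximation error of the binarization
```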
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Physics-Informed Neural Networks for AC Optimal Power Flow [0.0]
This paper introduces, for the first time, physics-informed neural networks to accurately estimate the AC-OPF result.
We show how physics-informed neural networks achieve higher accuracy and lower constraint violations than standard neural networks.
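The general recipe is to augment the supervised loss with a penalty on the residual of the governing equations. A minimal sketch under that assumption follows; `acopf_residual` is a hypothetical function returning the AC power-flow mismatch of the predicted setpoints, and the paper's exact formulation (e.g., KKT-based terms) is not reproduced here.

```python
import torch

def pinn_loss(model, x, y_true, acopf_residual, lam=0.1):
    y_pred = model(x)
    # Supervised error against the solver's solution.
    data_loss = torch.nn.functional.mse_loss(y_pred, y_true)
    # Penalty on the violation of the physical (power-flow) equations.
    physics_loss = acopf_residual(x, y_pred).pow(2).mean()
    return data_loss + lam * physics_loss
```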
arXiv Detail & Related papers (2021-10-06T11:44:59Z) - Physics-Informed Neural Networks for Minimising Worst-Case Violations in
DC Optimal Power Flow [0.0]
Physics-informed neural networks exploit the existing models of the underlying physical systems to generate higher accuracy results with fewer data.
Such approaches can help drastically reduce the computation time and generate a good estimate of computationally intensive processes in power systems.
Such neural networks can be applied in safety-critical applications in power systems and build a high level of trust among power system operators.
arXiv Detail & Related papers (2021-06-28T10:45:22Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF).
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
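A common way to inject sensitivity information is to also fit the solution's derivatives with respect to the inputs. The sketch below illustrates that idea for a single sample; `jac_true` would come from an OPF sensitivity analysis, and this is an illustration rather than the paper's exact scheme.

```python
import torch

def sidnn_loss(model, x, y_true, jac_true, mu=0.1):
    # x: single input sample of shape [n_in]
    y_pred = model(x)
    data_loss = torch.nn.functional.mse_loss(y_pred, y_true)
    # Input-output Jacobian of the network, shape [n_out, n_in];
    # create_graph=True so the sensitivity loss is differentiable in training.
    jac_pred = torch.autograd.functional.jacobian(model, x, create_graph=True)
    sens_loss = torch.nn.functional.mse_loss(jac_pred, jac_true)
    return data_loss + mu * sens_loss
```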
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate how TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z) - Dynamic Hard Pruning of Neural Networks at the Edge of the Internet [11.605253906375424]
Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training.
DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training.
Freed memory is reused by a dynamic batch sizing approach to counterbalance the accuracy degradation caused by the hard pruning strategy.
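A minimal sketch of the hard-pruning part is shown below: a global magnitude threshold is recomputed with a growing pruning fraction, and the resulting masks are re-applied after every optimizer step so pruned weights stay at zero. The exact mask-update schedule and dynamic batch-sizing rule of DynHP are not reproduced.

```python
import torch

def magnitude_masks(model, fraction):
    # Boolean masks that zero the `fraction` smallest-magnitude weights globally.
    all_w = torch.cat([p.detach().abs().flatten()
                       for p in model.parameters() if p.dim() > 1])
    k = max(1, int(fraction * all_w.numel()))
    thresh = all_w.kthvalue(k).values
    return [(p.detach().abs() > thresh) if p.dim() > 1 else None
            for p in model.parameters()]

def apply_masks(model, masks):
    # Hard-prune: force masked weights back to zero (call after each step).
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            if m is not None:
                p.mul_(m)

# During training, masks would be recomputed with a growing `fraction` every
# few epochs, and the batch size increased as the pruned model frees memory.
```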
arXiv Detail & Related papers (2020-11-17T10:23:28Z) - Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive
Meta-Pruning [83.59005356327103]
A common limitation of most existing pruning techniques is that they require pre-training of the network at least once before pruning.
We propose STAMP, which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset.
We validate STAMP against recent advanced pruning methods on benchmark datasets.
arXiv Detail & Related papers (2020-06-22T10:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.