Tractable learning in under-excited power grids
- URL: http://arxiv.org/abs/2005.01818v1
- Date: Mon, 4 May 2020 19:54:48 GMT
- Title: Tractable learning in under-excited power grids
- Authors: Deepjyoti Deka, Harish Doddi, Sidhant Misra, Murti Salapaka
- Abstract summary: We propose a novel learning algorithm for learning underexcited general (non-radial) networks based on physics-informed conservation laws.
We prove the correctness of our algorithm for grids with non-adjacent under-excited internal nodes.
Our approach is validated through simulations with non-linear voltage samples generated on test grids with real injection data.
- Score: 4.568911586155097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating the structure of physical flow networks such as power grids is
critical to secure delivery of energy. This paper discusses statistical
structure estimation in power grids in the "under-excited" regime, where a
subset of internal nodes do not have external injection. Prior estimation
algorithms based on nodal potentials or voltages fail in the under-excited
regime. We propose a novel topology learning algorithm for learning
underexcited general (non-radial) networks based on physics-informed
conservation laws. We prove the asymptotic correctness of our algorithm for
grids with non-adjacent under-excited internal nodes. More importantly, we
theoretically analyze our algorithm's efficacy under noisy measurements, and
determine bounds on maximum noise under which asymptotically correct recovery
is guaranteed. Our approach is validated through simulations with non-linear
voltage samples generated on test grids with real injection data.
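As a concrete illustration of the conservation law the paper builds on: at a zero-injection (under-excited) node, Kirchhoff's current law forces the net incoming flow to vanish in every sample. The sketch below is a toy, not the paper's learning algorithm; it assumes a 5-bus linearized (DC) grid with known topology and susceptances, and simply flags the under-excited node from voltage-angle samples.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy 5-bus linearized (DC) power-flow model: p = B @ theta.
# Node 2 is "under-excited": it has zero external injection in every sample.
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (3, 4)]    # assumed loopy (non-radial) test grid
b = {e: rng.uniform(1.0, 3.0) for e in edges}        # assumed line susceptances
n = 5
B = np.zeros((n, n))
for (i, j), bij in b.items():
    B[i, j] = B[j, i] = -bij
    B[i, i] += bij
    B[j, j] += bij

samples, zero_injection = 500, 2
P = rng.normal(size=(samples, n))
P[:, zero_injection] = 0.0                           # under-excited internal node
excited = [i for i in range(n) if i != zero_injection]
P[:, excited] -= P[:, excited].mean(axis=1, keepdims=True)   # injections balance to zero
Theta = P @ np.linalg.pinv(B)                        # voltage angles per sample

# Conservation law (KCL): the net flow into a zero-injection node is zero in every
# sample, so the residual |B_i . theta| identifies under-excited nodes.
residual = np.abs(Theta @ B).mean(axis=0)
print("per-node mean |injection| recovered from angles:", residual.round(3))
print("detected under-excited nodes:", np.where(residual < 1e-8)[0])
```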
Related papers
- Learning-Based Verification of Stochastic Dynamical Systems with Neural Network Policies [7.9898826915621965]
We use a verification procedure that trains another neural network, which acts as a certificate proving that the policy satisfies the task.
For reach-avoid tasks, it suffices to show that this certificate network is a reach-avoid supermartingale (RASM)
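A minimal numerical illustration of the RASM decrease condition (not the paper's verification procedure, which trains and formally certifies a neural certificate): on an assumed stable linear closed loop with a quadratic candidate certificate, the condition E[V(x')] <= V(x) - eps is checked by Monte Carlo outside an assumed target set.

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy stochastic linear system x' = A x + B u + w with a fixed linear policy u = -K x.
# All matrices below are assumptions chosen for illustration.
A  = np.array([[1.0, 0.1], [0.0, 1.0]])
Bm = np.array([[0.0], [0.1]])
K  = np.array([[5.0, 3.0]])
Acl = A - Bm @ K                            # closed-loop dynamics
sigma, eps, n_mc = 0.01, 1e-3, 4000

# Candidate certificate V(x) = x' P x from the discrete Lyapunov equation
# P = Acl' P Acl + I, solved via the Kronecker/vectorisation identity.
Q = np.eye(2)
P = np.linalg.solve(np.eye(4) - np.kron(Acl.T, Acl.T), Q.ravel()).reshape(2, 2)

V = lambda x: float(x @ P @ x)
step = lambda x: Acl @ x + rng.normal(scale=sigma, size=2)

# RASM-style decrease condition, checked empirically on states outside the
# target set {V(x) <= 0.2}:  E[V(x')] <= V(x) - eps.
violations = 0
for _ in range(200):
    x = rng.uniform(-1, 1, size=2)
    if V(x) <= 0.2:
        continue                            # no decrease required inside the target set
    exp_next = np.mean([V(step(x)) for _ in range(n_mc)])
    violations += exp_next > V(x) - eps
print("empirical violations of the decrease condition:", violations)  # expect 0
```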
arXiv Detail & Related papers (2024-06-02T18:19:19Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
Convergence guarantees and generalizability of the unrolled networks, however, remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
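A small sketch of the deep-unrolling idea under simple assumptions: each layer below is one ISTA iteration for a lasso problem, with per-layer step sizes and thresholds that a learning-to-optimize setup would train by backpropagation; here they are fixed to the classical values, and the problem sizes are assumptions.

```python
import numpy as np
rng = np.random.default_rng(2)

# Unrolled network: each "layer" is one ISTA iteration for the lasso problem
# min_x 0.5*||y - A x||^2 + lam*||x||_1, carrying its own step size and threshold.
m, n, k, n_layers, lam = 50, 100, 4, 16, 0.05
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the smooth part
steps = np.full(n_layers, 1.0 / L)          # per-layer step sizes (trainable in LISTA)
thresholds = np.full(n_layers, lam / L)     # per-layer soft thresholds (trainable)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
obj = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + lam * np.abs(x).sum()

x = np.zeros(n)
print("lasso objective at layer 0:", round(obj(x), 4))
for t in range(n_layers):                   # forward pass through the unrolled layers
    x = soft(x - steps[t] * A.T @ (A @ x - y), thresholds[t])
print(f"lasso objective after {n_layers} layers:", round(obj(x), 4))
```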
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Inferring networks from time series: a neural approach [3.115375810642661]
We present a powerful computational method to infer large network adjacency matrices from time series data using a neural network.
We demonstrate our capabilities by inferring line failure locations in the British power grid from its response to a power cut.
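For intuition only, a linear baseline rather than the paper's neural method: with an assumed linear network dynamic, the coupling matrix can be recovered by regressing x_{t+1} on x_t and thresholding. The random 10-node graph, noise level and threshold are assumptions.

```python
import numpy as np
rng = np.random.default_rng(3)

# Infer which nodes are coupled from a trajectory of x_{t+1} = W x_t + noise.
n, T = 10, 2000
A_true = (rng.random((n, n)) < 0.2).astype(float)    # directed adjacency to recover
np.fill_diagonal(A_true, 0)
W_raw = A_true * rng.uniform(0.5, 1.0, size=(n, n))
rho = np.abs(np.linalg.eigvals(W_raw)).max()
W_true = 0.9 * W_raw / max(1.0, rho)                 # rescale for a stable dynamic

X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = W_true @ X[t] + 0.1 * rng.normal(size=n)

# Least squares:  W_hat = argmin_W || X[1:] - X[:-1] W^T ||_F^2
W_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
A_hat = (np.abs(W_hat) > 0.1).astype(float)          # threshold small couplings to zero
print("fraction of node pairs classified correctly:", (A_hat == A_true).mean())
```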
arXiv Detail & Related papers (2023-03-30T15:51:01Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely Cascaded Forward (CaFo) algorithm, which does not rely on BP optimization as that in FF.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
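A CaFo-flavoured toy under strong simplifications: each block below is a fixed random-feature transform with its own closed-form ridge classifier that outputs a label distribution, and blocks are fitted one after another without any global backpropagation. The two-moons data, block widths and number of blocks are assumptions, not the paper's architecture.

```python
import numpy as np
rng = np.random.default_rng(4)

def make_data(n):
    # two-moons-style toy classification data
    t = rng.uniform(0, np.pi, n)
    c = rng.integers(0, 2, n)
    x = np.c_[np.cos(t) + c, np.sin(t) * (1 - 2 * c)] + 0.1 * rng.normal(size=(n, 2))
    return x, c

Xtr, ytr = make_data(1000)
Xte, yte = make_data(500)
Ytr = np.eye(2)[ytr]                                   # one-hot targets

def train_block(X, Y, width=64, lam=1e-2):
    W = rng.normal(size=(X.shape[1], width))           # fixed random projection
    H = np.tanh(X @ W)                                  # block features
    # local closed-form ridge classifier for this block only
    C = np.linalg.solve(H.T @ H + lam * np.eye(width), H.T @ Y)
    return W, C

def block_forward(X, W, C):
    H = np.tanh(X @ W)
    logits = H @ C
    probs = np.exp(logits - logits.max(1, keepdims=True))
    return H, probs / probs.sum(1, keepdims=True)       # per-block label distribution

blocks, H = [], Xtr
for _ in range(3):                                      # each block fitted in isolation
    W, C = train_block(H, Ytr)
    blocks.append((W, C))
    H, _ = block_forward(H, W, C)                       # next block sees these features

# inference: average the per-block label distributions
H, votes = Xte, np.zeros((len(Xte), 2))
for W, C in blocks:
    H, p = block_forward(H, W, C)
    votes += p
print("cascade test accuracy:", (votes.argmax(1) == yte).mean())
```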
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
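To illustrate the box over-approximation idea only (not the paper's embedded-network construction for implicit networks), the sketch below propagates an $\ell_\infty$ input box through a small explicit ReLU network with assumed random weights and checks the certified box by sampling.

```python
import numpy as np
rng = np.random.default_rng(5)

# Interval bound propagation: an l_infinity box over-approximation of the outputs
# of a small feed-forward ReLU network over a box of inputs.
W1, b1 = rng.normal(size=(16, 4)) * 0.3, rng.normal(size=16) * 0.1
W2, b2 = rng.normal(size=(2, 16)) * 0.3, rng.normal(size=2) * 0.1
relu = lambda z: np.maximum(z, 0.0)
net = lambda x: W2 @ relu(W1 @ x + b1) + b2

def affine_box(W, b, lo, hi):
    # exact image box of an axis-aligned box under an affine map
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    c = W @ mid + b
    r = np.abs(W) @ rad
    return c - r, c + r

def reachable_box(lo, hi):
    lo1, hi1 = affine_box(W1, b1, lo, hi)
    lo1, hi1 = relu(lo1), relu(hi1)          # ReLU is monotone, applied elementwise
    return affine_box(W2, b2, lo1, hi1)

x0, eps = rng.normal(size=4), 0.1
lo_out, hi_out = reachable_box(x0 - eps, x0 + eps)

# sanity check: sampled perturbations stay inside the certified output box
samples = x0 + rng.uniform(-eps, eps, size=(10000, 4))
outs = np.array([net(x) for x in samples])
print(bool((outs >= lo_out - 1e-9).all() and (outs <= hi_out + 1e-9).all()))  # True
```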
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Network Gradient Descent Algorithm for Decentralized Federated Learning [0.2867517731896504]
We study a fully decentralized federated learning algorithm, a novel gradient descent algorithm executed over a communication-based network.
In the network gradient descent (NGD) method, only statistics (e.g., parameter estimates) need to be communicated, minimizing the risk of privacy leakage.
We find that both the learning rate and the network structure play significant roles in determining the NGD estimator's statistical efficiency.
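A minimal sketch of decentralized learning in this spirit, under an assumed ring topology, mixing weights and a linear-regression task: each node takes a local gradient step on its private data and then averages parameter estimates with its neighbours, so only parameters travel over the network.

```python
import numpy as np
rng = np.random.default_rng(6)

n_nodes, d, n_local, lr, rounds = 10, 5, 200, 0.05, 300
theta_true = rng.normal(size=d)
X = rng.normal(size=(n_nodes, n_local, d))              # private data per node
y = X @ theta_true + 0.1 * rng.normal(size=(n_nodes, n_local))

# doubly stochastic mixing matrix for a ring: 1/2 self, 1/4 each neighbour
Wmix = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    Wmix[i, i] = 0.5
    Wmix[i, (i - 1) % n_nodes] = Wmix[i, (i + 1) % n_nodes] = 0.25

Theta = np.zeros((n_nodes, d))                          # one estimate per node
for _ in range(rounds):
    grads = np.stack([X[i].T @ (X[i] @ Theta[i] - y[i]) / n_local
                      for i in range(n_nodes)])
    Theta = Wmix @ (Theta - lr * grads)                 # local step, then neighbour averaging

err = np.linalg.norm(Theta - theta_true, axis=1).max()
print("worst-node parameter error:", round(float(err), 4))   # each node ends close to theta_true
```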
arXiv Detail & Related papers (2022-05-06T02:53:31Z)
- A scalable multi-step least squares method for network identification with unknown disturbance topology [0.0]
We present an identification method for dynamic networks with known network topology.
We use a multi-step Sequential and Null Space Fitting method to deal with reduced rank noise.
We provide a consistency proof that includes explicit informativity conditions for the Box model structure.
arXiv Detail & Related papers (2021-06-14T16:12:49Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
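A sketch of the linear-bandit core only, leaving out the paper's representation learning and likelihood matching: Thompson sampling with a Bayesian linear model on top of fixed, assumed per-arm features.

```python
import numpy as np
rng = np.random.default_rng(7)

d, n_arms, T, sigma, lam = 8, 20, 3000, 0.1, 1.0
phi = rng.normal(size=(n_arms, d)) / np.sqrt(d)      # fixed per-arm feature vectors
w_true = rng.normal(size=d)

A = lam * np.eye(d)                                  # posterior precision
b = np.zeros(d)                                      # precision-weighted mean
regret = 0.0
best = (phi @ w_true).max()

for t in range(T):
    cov = np.linalg.inv(A) * sigma ** 2
    w_sample = rng.multivariate_normal(np.linalg.solve(A, b), cov)   # posterior sample
    arm = int(np.argmax(phi @ w_sample))             # act greedily w.r.t. the sample
    reward = phi[arm] @ w_true + sigma * rng.normal()
    A += np.outer(phi[arm], phi[arm])                # online Bayesian linear update
    b += reward * phi[arm]
    regret += best - phi[arm] @ w_true

print("average per-round regret:", round(regret / T, 4))  # small relative to the reward scale
```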
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- State Estimation of Power Flows for Smart Grids via Belief Propagation [0.0]
Belief propagation is an algorithm that is known from statistical physics and computer science.
We show that belief propagation scales linearly with the grid size for the state estimation itself.
It also facilitates and accelerates the retrieval of missing data and allows an optimized positioning of measurement units.
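A generic sum-product sketch, not the paper's grid model: belief propagation computes exact per-variable marginals on a small chain factor graph at a cost linear in the chain length, checked here against brute-force enumeration. The potentials are random assumptions.

```python
import numpy as np
np.random.seed(0)

K, N = 3, 5                               # K states per variable, N variables in a chain
phi = np.random.rand(N, K) + 0.1          # unary potentials (e.g. measurement likelihoods)
psi = np.random.rand(N - 1, K, K) + 0.1   # pairwise potentials between neighbours

# forward and backward messages; cost is O(N * K^2), i.e. linear in chain length
fwd = [np.ones(K)]
for i in range(N - 1):
    m = (phi[i] * fwd[i]) @ psi[i]
    fwd.append(m / m.sum())
bwd = [np.ones(K)]
for i in range(N - 1, 0, -1):
    m = psi[i - 1] @ (phi[i] * bwd[-1])
    bwd.append(m / m.sum())
bwd = bwd[::-1]

marg = phi * np.stack(fwd) * np.stack(bwd)
marg /= marg.sum(axis=1, keepdims=True)

# brute-force check on the full joint (exponential cost, only for this tiny example)
joint = np.zeros((K,) * N)
for idx in np.ndindex(*(K,) * N):
    p = np.prod([phi[i, idx[i]] for i in range(N)])
    p *= np.prod([psi[i, idx[i], idx[i + 1]] for i in range(N - 1)])
    joint[idx] = p
joint /= joint.sum()
exact = np.stack([joint.sum(axis=tuple(k for k in range(N) if k != i)) for i in range(N)])
print(np.allclose(marg, exact))           # True: BP marginals match brute force
```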
arXiv Detail & Related papers (2020-12-18T19:22:03Z)
- Constant-Expansion Suffices for Compressed Sensing with Generative Priors [26.41623833920794]
We prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion of pseudo-Lipschitzness.
Since the WDC is a fundamental concentration inequality at the heart of all existing theoretical guarantees on this problem, our bound improves upon all known results on compressed sensing with generative priors, including one-bit recovery and more.
arXiv Detail & Related papers (2020-06-07T19:14:41Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
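A deliberately simplified equilibrium-propagation sketch: with a quadratic energy both the free and nudged equilibria are available in closed form, and contrasting dE/dW at the two equilibria approximates gradient descent on the cost. The sizes, fixed readout and single training pair below are assumptions; the paper's nonlinear resistive networks are richer.

```python
import numpy as np
rng = np.random.default_rng(8)

d_in, d_hid, d_out, beta, lr = 4, 8, 2, 0.01, 0.05
W = rng.normal(scale=0.1, size=(d_hid, d_in))      # trainable weights
A = rng.normal(scale=0.3, size=(d_out, d_hid))     # fixed linear readout
x, y = rng.normal(size=d_in), rng.normal(size=d_out)

# Energy E(s) = 0.5*||s||^2 - s.T @ W @ x ; cost C(s) = 0.5*||A s - y||^2
free = lambda W: W @ x                              # argmin_s E(s)
nudged = lambda W: np.linalg.solve(np.eye(d_hid) + beta * A.T @ A,
                                   W @ x + beta * A.T @ y)   # argmin_s E + beta*C

cost = lambda W: 0.5 * np.sum((A @ free(W) - y) ** 2)
print("cost before training:", round(cost(W), 4))

for _ in range(300):
    s0, sb = free(W), nudged(W)
    # EqProp update: contrast dE/dW = -s x^T at the nudged and free equilibria,
    # scaled by 1/beta; this approximates gradient descent on the cost C.
    W += lr * np.outer(sb - s0, x) / beta

print("cost after training:", round(cost(W), 4))    # much smaller than before
```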
arXiv Detail & Related papers (2020-06-02T23:38:35Z)