Fast Block Linear System Solver Using Q-Learning Scheduling for Unified
Dynamic Power System Simulations
- URL: http://arxiv.org/abs/2110.05843v1
- Date: Tue, 12 Oct 2021 09:10:27 GMT
- Title: Fast Block Linear System Solver Using Q-Learning Scheduling for Unified
Dynamic Power System Simulations
- Authors: Yingshi Chen and Xinli Song and HanYang Dai and Tao Liu and Wuzhi
Zhong and Guoyang Wu
- Abstract summary: This solver uses a novel Q-learning based method for task scheduling.
The simulation on some large power systems shows that our solver is 2-6 times faster than KLU.
- Score: 2.1509980377118767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a fast block direct solver for the unified dynamic simulations of
power systems. This solver uses a novel Q-learning based method for task
scheduling. Unified dynamic simulations of power systems represent a method in
which the electric-mechanical transient, medium-term and long-term dynamic
phenomena are organically united. Because the resulting systems are of high
dimension and must be solved a large number of times, fast solution of these
equations is the key to speeding up the simulation. The sparse systems arising
in simulation contain a complex nested block structure, which the solver can
exploit for speed. For the scheduling of blocks and frontals in the solver, we
use a learning-based task-tree scheduling technique in the framework of a
Markov Decision Process. That is, we learn scheduling strategies by offline
training on many sample matrices; for a new system, the solver then derives
its task partition and schedule from the learned model. Our learning-based
algorithm improves the performance of the sparse solver, as verified in
numerical experiments. Simulations of some large power systems show that our solver
is 2-6 times faster than KLU, which is the state-of-the-art sparse solver for
circuit simulation problems.
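The paper gives no code, but the core idea of the abstract, learning a task-scheduling policy with tabular Q-learning over a Markov Decision Process whose states are sets of completed tasks, can be illustrated with a toy sketch. The task graph, durations, two-worker model, and all hyperparameters below are invented for illustration; this is not the paper's frontal/block solver, only a minimal instance of the general technique.

```python
import random

# Toy task DAG: task -> (duration, set of prerequisite tasks).
# All names and numbers are illustrative, not from the paper.
TASKS = {
    "A": (3, set()),
    "B": (2, set()),
    "C": (4, {"A"}),
    "D": (1, {"A", "B"}),
    "E": (2, {"C", "D"}),
}

def ready(done):
    """Tasks not yet done whose prerequisites are all completed."""
    return sorted(t for t, (_, deps) in TASKS.items()
                  if t not in done and deps <= done)

def simulate(order):
    """Greedy list scheduling of a topological `order` onto 2 workers;
    returns the makespan (time when the last task finishes)."""
    workers = [0.0, 0.0]   # time at which each worker becomes free
    finish = {}            # task -> finish time
    for t in order:
        dur, deps = TASKS[t]
        w = min(range(2), key=lambda i: workers[i])
        start = max([workers[w]] + [finish[d] for d in deps])
        finish[t] = start + dur
        workers[w] = finish[t]
    return max(finish.values())

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Tabular Q-learning: state = frozenset of completed tasks,
    action = next ready task, terminal reward = negative makespan."""
    rng = random.Random(seed)
    Q = {}  # (state, action) -> value
    for _ in range(episodes):
        done, order = frozenset(), []
        while len(done) < len(TASKS):
            acts = ready(done)
            if rng.random() < eps:          # explore
                a = rng.choice(acts)
            else:                           # exploit learned values
                a = max(acts, key=lambda t: Q.get((done, t), 0.0))
            order.append(a)
            done = done | {a}
        r = -simulate(order)
        # Monte-Carlo style update: push the terminal reward back to
        # every (state, action) pair visited in this episode.
        d = frozenset()
        for a in order:
            key = (d, a)
            Q[key] = Q.get(key, 0.0) + alpha * (r - Q.get(key, 0.0))
            d = d | {a}
    return Q

def best_order(Q):
    """Greedy rollout of the learned policy."""
    done, order = frozenset(), []
    while len(done) < len(TASKS):
        a = max(ready(done), key=lambda t: Q.get((done, t), 0.0))
        order.append(a)
        done = done | {a}
    return order
```

In the paper's setting the analogous state would describe the progress of block/frontal elimination in the sparse factorization, and offline training on many sample matrices would replace the single toy DAG here.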
Related papers
- Data-Driven H-infinity Control with a Real-Time and Efficient
Reinforcement Learning Algorithm: An Application to Autonomous
Mobility-on-Demand Systems [3.5897534810405403]
This paper presents a model-free, real-time, data-efficient Q-learning-based algorithm to solve the $H_\infty$ control of linear discrete-time systems.
An adaptive optimal controller is designed and the parameters of the action and critic networks are learned online without the knowledge of the system dynamics.
arXiv Detail & Related papers (2023-09-16T05:02:41Z) - Algorithms for perturbative analysis and simulation of quantum dynamics [0.0]
We develop general-purpose algorithms for computing and utilizing both the Dyson series and Magnus expansion.
We demonstrate how to use these tools to approximate fidelity in a region of model parameter space.
We show how the pre-computation step can be phrased as a multivariable expansion problem with fewer terms than in the original method.
arXiv Detail & Related papers (2022-10-20T21:07:47Z) - Constructing Optimal Contraction Trees for Tensor Network Quantum
Circuit Simulation [1.2704529528199062]
One of the key problems in quantum circuit simulation is the construction of a contraction tree.
We introduce a novel algorithm for constructing an optimal contraction tree.
We show that our method achieves superior results on a majority of tested quantum circuits.
arXiv Detail & Related papers (2022-09-07T02:50:30Z) - Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC)
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z) - Efficient Differentiable Simulation of Articulated Bodies [89.64118042429287]
We present a method for efficient differentiable simulation of articulated bodies.
This enables integration of articulated body dynamics into deep learning frameworks.
We show that reinforcement learning with articulated systems can be accelerated using gradients provided by our method.
arXiv Detail & Related papers (2021-09-16T04:48:13Z) - PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable
Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z) - Accelerating GMRES with Deep Learning in Real-Time [0.0]
We show a real-time machine learning algorithm that can be used to accelerate the time-to-solution for GMRES.
Our framework is novel in that it integrates the deep learning algorithm in an in situ fashion.
arXiv Detail & Related papers (2021-03-19T18:21:38Z) - Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z) - Hamilton-Jacobi Deep Q-Learning for Deterministic Continuous-Time
Systems with Lipschitz Continuous Controls [2.922007656878633]
We propose Q-learning algorithms for continuous-time deterministic optimal control problems with Lipschitz continuous controls.
A novel semi-discrete version of the HJB equation is proposed to design a Q-learning algorithm that uses data collected in discrete time without discretizing or approximating the system dynamics.
arXiv Detail & Related papers (2020-10-27T06:11:04Z) - Reinforcement Learning with Fast Stabilization in Linear Dynamical
Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z) - Physarum Powered Differentiable Linear Programming Layers and
Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.