An Intelligent Control Strategy for buck DC-DC Converter via Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2008.04542v1
- Date: Tue, 11 Aug 2020 06:38:53 GMT
- Title: An Intelligent Control Strategy for buck DC-DC Converter via Deep
Reinforcement Learning
- Authors: Chenggang Cui, Nan Yan, Chuanlin Zhang
- Abstract summary: An innovative intelligent control strategy for buck DC-DC converter with constant power loads (CPLs) is constructed for the first time.
A Markov Decision Process (MDP) model and the deep Q network (DQN) algorithm are defined for DC-DC converter.
A model-free based deep reinforcement learning (DRL) control strategy is appropriately designed to adjust the agent-environment interaction.
- Score: 1.4502611532302039
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As a typical switching power supply, the DC-DC converter has been widely
applied in DC microgrid. Due to the variation of renewable energy generation,
research and design of DC-DC converter control algorithms with outstanding
dynamic characteristics have significant theoretical and practical application
value. To mitigate the bus voltage stability issue in DC microgrids, an
innovative intelligent control strategy for buck DC-DC converter with constant
power loads (CPLs) via deep reinforcement learning algorithm is constructed for
the first time. In this article, a Markov Decision Process (MDP) model and the
deep Q network (DQN) algorithm are defined for DC-DC converter. A model-free
based deep reinforcement learning (DRL) control strategy is appropriately
designed to adjust the agent-environment interaction through the
rewards/penalties mechanism, steering the system toward convergence to the nominal voltage. The
agent makes approximate decisions by extracting the high-dimensional feature of
complex power systems without any prior knowledge. Eventually, the simulation
comparison results demonstrate that the proposed controller has stronger
self-learning and self-optimization capabilities under different scenarios.
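The abstract outlines the control loop (an MDP whose reward/penalty signal drives the duty cycle toward the nominal voltage) without giving implementation details. As a rough illustration, the sketch below substitutes tabular Q-learning for the paper's DQN and uses an averaged buck-converter model feeding a CPL; all parameters (input voltage, nominal voltage, CPL power, discretization, action set) are assumed, not taken from the paper.

```python
import numpy as np

# Illustrative stand-in only: the paper trains a DQN; tabular Q-learning on a
# discretized voltage error keeps this example self-contained.
# All parameters below are assumed, not taken from the paper.
VIN, VREF, P_CPL = 48.0, 24.0, 50.0        # input voltage, nominal voltage, CPL power
L_IND, C_CAP, R_ESR, DT = 1e-3, 1e-3, 0.5, 1e-4

def plant_step(i, v, duty):
    """One Euler step of the averaged buck converter feeding a CPL."""
    di = (duty * VIN - v - R_ESR * i) / L_IND
    dv = (i - P_CPL / max(v, 1.0)) / C_CAP   # CPL draws P/v amps
    return i + DT * di, v + DT * dv

ACTIONS = np.linspace(0.1, 0.9, 9)           # candidate duty cycles
N_BINS = 21

def discretize(v):
    """Map the voltage error onto one of N_BINS discrete states."""
    err = np.clip(v - VREF, -10.0, 10.0)
    return int(round((err + 10.0) / 20.0 * (N_BINS - 1)))

rng = np.random.default_rng(0)
Q = np.zeros((N_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2           # learning rate, discount, exploration

for episode in range(200):
    i, v = 0.0, 20.0                          # start below the nominal voltage
    s = discretize(v)
    for t in range(200):
        # epsilon-greedy action selection over candidate duty cycles
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        i, v = plant_step(i, v, ACTIONS[a])
        r = -abs(v - VREF)                    # penalty grows with voltage error
        s2 = discretize(v)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

A real DQN would replace the Q table with a neural network, an experience-replay buffer, and a target network, but the agent-environment interaction pattern via rewards/penalties is the same.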
Related papers
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WECs) must handle multiple simultaneous waves coming from different directions, called spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z) - Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers without Retraining [50.00291020618743]
This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining.
We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU).
Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems.
arXiv Detail & Related papers (2024-04-08T20:02:19Z) - Rethinking Decision Transformer via Hierarchical Reinforcement Learning [54.3596066989024]
Decision Transformer (DT) is an innovative algorithm leveraging recent advances in transformer architectures for reinforcement learning (RL).
We introduce a general sequence modeling framework for studying sequential decision making through the lens of Hierarchical RL.
We show DT emerges as a special case of this framework with certain choices of high-level and low-level policies, and discuss the potential failure of these choices.
arXiv Detail & Related papers (2023-11-01T03:32:13Z) - Active RIS-aided EH-NOMA Networks: A Deep Reinforcement Learning
Approach [66.53364438507208]
An active reconfigurable intelligent surface (RIS)-aided multi-user downlink communication system is investigated.
Non-orthogonal multiple access (NOMA) is employed to improve spectral efficiency, and the active RIS is powered by energy harvesting (EH).
An advanced LSTM-based algorithm is developed to predict users' dynamic communication state.
A DDPG-based algorithm is proposed to jointly control the amplification matrix and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2023-04-11T13:16:28Z) - Stabilizing Voltage in Power Distribution Networks via Multi-Agent
Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z) - Artificial Neural Network-Based Voltage Control of DC/DC Converter for
DC Microgrid Applications [2.15242029196761]
An artificial neural network (ANN) based voltage control strategy is proposed for the DC-DC boost converter.
The accuracy of the trained ANN model is about 97%, which makes it suitable for DC applications.
arXiv Detail & Related papers (2021-11-05T01:20:27Z) - Transferring Reinforcement Learning for DC-DC Buck Converter Control via
Duty Ratio Mapping: From Simulation to Implementation [0.0]
This paper presents a transferring methodology via a delicately designed duty ratio mapping (DRM) for a DC-DC buck converter.
A detailed sim-to-real process is presented to enable the implementation of a model-free deep reinforcement learning (DRL) controller.
arXiv Detail & Related papers (2021-10-20T11:08:17Z) - Adaptive Energy Management for Real Driving Conditions via Transfer
Reinforcement Learning [19.383907178714345]
This article proposes a transfer reinforcement learning (RL) based adaptive energy management approach for a hybrid electric vehicle (HEV) with parallel topology.
The upper level characterizes how to transform the Q-value tables in the RL framework via driving cycle transformation (DCT).
The lower level determines how to set the corresponding control strategies with the transformed Q-value tables and TPMs.
arXiv Detail & Related papers (2020-07-24T15:06:23Z) - Optimization-driven Deep Reinforcement Learning for Robust Beamforming
in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z) - Reinforcement Learning for Thermostatically Controlled Loads Control
using Modelica and Python [0.0]
The project aims to investigate and assess opportunities for applying reinforcement learning (RL) for power system control.
As a proof of concept (PoC), voltage control of thermostatically controlled loads (TCLs) for power consumption was developed using a Modelica-based pipeline.
The paper shows the influence of Q-learning parameters, including discretization of state-action space, on the controller performance.
arXiv Detail & Related papers (2020-05-09T13:35:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.