Efficient Training of Learning-Based Thermal Power Flow for 4th Generation District Heating Grids
- URL: http://arxiv.org/abs/2403.11877v1
- Date: Mon, 18 Mar 2024 15:31:09 GMT
- Title: Efficient Training of Learning-Based Thermal Power Flow for 4th Generation District Heating Grids
- Authors: Andreas Bott, Mario Beykirch, Florian Steinke
- Abstract summary: We propose a novel, efficient scheme to generate a sufficiently large training data set covering relevant supply and demand values.
Instead of sampling supply and demand values, our approach generates training examples from a proxy distribution over generator and consumer mass flows.
We show with simulations for typical grid structures that the new approach can reduce training set generation times by two orders of magnitude.
- Score: 1.0923877073891446
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Thermal power flow (TPF) is an important task for various control purposes in 4th generation district heating grids with multiple decentral heat sources and meshed grid structures. Computing the TPF, i.e., determining the grid state consisting of temperatures, pressures, and mass flows for given supply and demand values, is classically done by solving the nonlinear heat grid equations, but can be sped up by orders of magnitude using learned models such as neural networks. We propose a novel, efficient scheme to generate a sufficiently large training data set covering relevant supply and demand values. Instead of sampling supply and demand values, our approach generates training examples from a proxy distribution over generator and consumer mass flows, omitting the iterations needed for solving the heat grid equations. The exact, but slightly different, training examples can be weighted to represent the original training distribution. We show with simulations for typical grid structures that the new approach can reduce training set generation times by two orders of magnitude compared to sampling supply and demand values directly, without loss of relevance for the training samples. Moreover, learning TPF with a training data set is shown to outperform sample-free, physics-aware training approaches significantly.
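The key idea is a form of importance sampling: training examples are drawn where they are cheap to generate, in the space of generator and consumer mass flows, where the grid state follows without iteration, and are then reweighted to represent the intended distribution over supply and demand values. Below is a minimal sketch of that scheme; every function is a placeholder, not the paper's actual grid equations or distributions, and the change-of-variables Jacobian between the two spaces is omitted for brevity.
```python
import numpy as np

def grid_state_from_mass_flows(m):
    """Placeholder for evaluating the heat grid equations at fixed mass flows.

    With mass flows given, temperatures and pressures follow by forward
    evaluation, so no iterative solver is required -- the key saving of
    the proposed scheme.
    """
    return np.tanh(m)

def demand_from_state(m, state):
    """Placeholder mapping mass flows and grid state to supply/demand values,
    in the spirit of q = m * c_p * (T_supply - T_return)."""
    return m * state

def target_density(d):
    """Unnormalized density of the intended supply/demand distribution."""
    return np.exp(-0.5 * np.sum((d - 1.0) ** 2, axis=-1))

def proxy_density(m):
    """Unnormalized density of the proxy distribution over mass flows."""
    return np.exp(-0.5 * np.sum(m ** 2, axis=-1))

rng = np.random.default_rng(0)
m = rng.normal(size=(10_000, 4))        # sample mass flows from the proxy
state = grid_state_from_mass_flows(m)   # exact grid states, no iterations
demand = demand_from_state(m, state)    # implied supply/demand values

# Importance weights let the proxy samples represent the target distribution
# (the paper additionally accounts for the change of variables; omitted here).
w = target_density(demand) / proxy_density(m)
w /= w.sum()

# (demand, state, w) now forms a weighted training set for the neural TPF
# surrogate, e.g. via a weighted MSE loss.
```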
Related papers
- A Foundation Model for Massive MIMO Precoding with an Adaptive per-User Rate-Power Tradeoff [4.8310710966636545]
We propose a transformer-based foundation model for mMIMO precoding that seeks to minimize the energy consumption of the transmitter while dynamically adapting to per-user rate requirements. At equal energy consumption, zero-shot deployment of the proposed foundation model significantly outperforms zero forcing, and approaches weighted minimum mean squared error performance with 8x less complexity. Our work enables the implementation of DL-based solutions in practice by addressing challenges of data availability and training complexity.
arXiv Detail & Related papers (2025-07-24T17:10:06Z) - Numerical simulation of transient heat conduction with moving heat source using Physics Informed Neural Networks [0.0]
In this paper, physics-informed neural networks (PINNs) are employed for the numerical simulation of heat transfer involving a moving source. A new training method is proposed that uses continuous time-stepping through transfer learning. The proposed framework is used to estimate the temperature distribution in a homogeneous medium with a moving heat source.
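As a rough illustration of what continuous time-stepping via transfer learning can look like (a sketch under assumed details, not the paper's exact setup): the PINN for each time window starts from the previous window's weights, so it only has to learn the increment. Only the PDE residual of a generic 1-D heat equation with a moving Gaussian source is shown; initial and boundary losses are omitted.
```python
import torch
import torch.nn as nn

# Small PINN mapping (x, t) -> u(x, t); architecture is illustrative.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def pde_residual(net, x, t, alpha=0.1, v=1.0):
    """Residual of u_t - alpha * u_xx = source, with a moving Gaussian source."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    source = torch.exp(-((x - v * t) ** 2) / 0.01)
    return u_t - alpha * u_xx - source

dt = 0.2
for k in range(5):                       # successive windows [k*dt, (k+1)*dt]
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        x = torch.rand(256, 1)
        t = k * dt + dt * torch.rand(256, 1)
        loss = pde_residual(net, x, t).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Transfer learning: `net` carries its trained weights into window k+1.
```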
arXiv Detail & Related papers (2025-06-21T14:51:46Z) - FORT: Forward-Only Regression Training of Normalizing Flows [85.66894616735752]
We revisit classical normalizing flows as one-step generative models with exact likelihoods. We propose a novel, scalable training objective that does not require computing the expensive change-of-variables formula used in conventional maximum likelihood training.
arXiv Detail & Related papers (2025-06-01T20:32:27Z) - Efficient Generative Model Training via Embedded Representation Warmup [6.783363935446626]
Diffusion models excel at generating high-dimensional data but fall short in training efficiency and representation quality compared to self-supervised methods.
We identify a key bottleneck: the underutilization of high-quality, semantically rich representations during training.
We propose Embedded Representation Warmup (ERW), a plug-and-play framework whose first stage initializes the early layers of the model with high-quality pretrained representations.
arXiv Detail & Related papers (2025-04-14T12:43:17Z) - A Bayesian Flow Network Framework for Chemistry Tasks [0.0]
We introduce ChemBFN, a language model that handles chemistry tasks based on Bayesian flow networks.
A new accuracy schedule is proposed to improve the sampling quality.
We show evidence that our method is suitable for generating molecules with satisfactory diversity even when a smaller number of sampling steps is used.
arXiv Detail & Related papers (2024-07-28T04:46:32Z) - To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO [68.69840111477367]
We present a principled framework for learning a small yet generalizable temperature prediction network (TempNet) to improve LFMs.
Our experiments on LLMs and CLIP models demonstrate that TempNet greatly improves the performance of existing solutions or models.
arXiv Detail & Related papers (2024-04-06T09:55:03Z) - Addressing Heterogeneity in Federated Load Forecasting with Personalization Layers [3.933147844455233]
We propose the use of personalization layers for load forecasting in a general framework called PL-FL.
We show that PL-FL outperforms FL and purely local training, while requiring lower communication bandwidth than FL.
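A minimal sketch of the personalization-layer idea under assumed details (the model and split point are illustrative, not the paper's PL-FL configuration): shared layers are communicated and averaged across clients, while each client's personalization layers never leave the client.
```python
import torch
import torch.nn as nn

def make_model():
    # First layer shared across clients; last layer personalized per client.
    return nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))

SHARED_PREFIX = "0."     # parameter names of the shared layer(s)

clients = [make_model() for _ in range(3)]

def fedavg_shared(models):
    """Average only the shared parameters and write them back to all clients."""
    avg = {k: torch.stack([m.state_dict()[k] for m in models]).mean(0)
           for k in models[0].state_dict() if k.startswith(SHARED_PREFIX)}
    for m in models:
        sd = m.state_dict()
        sd.update(avg)           # overwrite shared layers; keep personal ones
        m.load_state_dict(sd)

# Each client trains locally on its own load data, then only the shared
# layers are aggregated -- reducing communication relative to full FedAvg.
fedavg_shared(clients)
```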
arXiv Detail & Related papers (2024-04-01T22:53:09Z) - Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training [58.20089993899729]
This paper proposes TempBalance, a straightforward yet effective layerwise learning rate method.
We show that TempBalance significantly outperforms ordinary SGD and carefully-tuned spectral norm regularization.
We also show that TempBalance outperforms a number of state-of-the-art metrics and schedulers.
arXiv Detail & Related papers (2023-12-01T05:38:17Z) - Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
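A minimal sketch of the online-training pattern described above (illustrative, not the framework's actual API): a producer thread runs a mocked solver and streams batches through a bounded queue directly into the training loop, so nothing is written to or read back from disk.
```python
import queue
import threading
import numpy as np

buf = queue.Queue(maxsize=64)    # bounded buffer between solver and trainer

def solver_worker(n_batches=100):
    """Mock PDE solver producing (parameters, solution) batches."""
    rng = np.random.default_rng(3)
    for _ in range(n_batches):
        params = rng.normal(size=(32, 8))
        solution = np.sin(params).sum(axis=-1, keepdims=True)
        buf.put((params, solution))
    buf.put(None)                # end-of-stream marker

threading.Thread(target=solver_worker, daemon=True).start()

while (batch := buf.get()) is not None:
    x, y = batch
    # One optimizer step of the deep surrogate on (x, y) would go here.
```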
arXiv Detail & Related papers (2023-06-28T12:02:27Z) - Deep Learning-enabled MCMC for Probabilistic State Estimation in District Heating Grids [0.0]
District heating grids are an important part of future, low-carbon energy systems.
We use Markov Chain Monte Carlo sampling in the space of network heat exchanges to estimate the posterior.
A deep neural network is trained to approximate the solution of the exact but slow non-linear solver.
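A minimal sketch of the surrogate-accelerated sampler this describes, with placeholders for the paper's grid model, prior, and measurements: random-walk Metropolis-Hastings in the space of heat exchanges, where the learned surrogate replaces the slow nonlinear solver inside the likelihood.
```python
import numpy as np

def surrogate_state(q):
    """Stand-in for the trained network mapping heat exchanges to grid state."""
    return np.tanh(q)

def log_posterior(q, y_meas, sigma=0.1):
    state = surrogate_state(q)                   # fast forward pass, no solver
    log_lik = -0.5 * np.sum((y_meas - state) ** 2) / sigma ** 2
    log_prior = -0.5 * np.sum(q ** 2)            # placeholder Gaussian prior
    return log_lik + log_prior

rng = np.random.default_rng(1)
y_meas = np.array([0.3, -0.1, 0.5])              # placeholder measurements
q = np.zeros(3)
samples = []
for _ in range(5000):
    q_prop = q + 0.1 * rng.normal(size=q.shape)  # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(q_prop, y_meas) - log_posterior(q, y_meas):
        q = q_prop
    samples.append(q)
posterior_samples = np.array(samples[1000:])     # discard burn-in
```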
arXiv Detail & Related papers (2023-05-24T08:47:01Z) - Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z) - Embedded training of neural-network sub-grid-scale turbulence models [0.0]
The weights of a deep neural network model are optimized in conjunction with the governing flow equations to provide a model for sub-grid-scale stresses.
The training is by a gradient descent method, which uses the adjoint Navier-Stokes equations to provide the end-to-end sensitivities of the model weights to the velocity fields.
arXiv Detail & Related papers (2021-05-03T17:28:39Z) - Principal Component Density Estimation for Scenario Generation Using Normalizing Flows [62.997667081978825]
We propose a dimensionality-reducing flow layer based on linear principal component analysis (PCA) that sets up the normalizing flow in a lower-dimensional space.
We train the resulting principal component flow (PCF) on data of PV and wind power generation as well as load demand in Germany in the years 2013 to 2015.
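A minimal sketch of the dimensionality-reducing PCA layer (shapes are illustrative, e.g. 96-step daily profiles, and the normalizing flow trained on the latent coordinates is stubbed out):
```python
import numpy as np

X = np.random.default_rng(2).normal(size=(1000, 96))  # placeholder scenarios

# Fit the fixed linear PCA layer: center the data, keep the top-k directions.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 8
W = Vt[:k].T                     # (96, k) orthonormal principal directions

def pca_forward(x):
    """Dimensionality-reducing layer: scenario space -> latent space."""
    return (x - mu) @ W

def pca_inverse(z):
    """Approximate inverse: latent space -> scenario space."""
    return z @ W.T + mu

Z = pca_forward(X)               # latent training data for the flow
# train_normalizing_flow(Z)      # any standard flow is set up here; omitted
```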
arXiv Detail & Related papers (2021-04-21T08:42:54Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.