Low-Dimensional High-Fidelity Kinetic Models for NOX Formation by a
Compute Intensification Method
- URL: http://arxiv.org/abs/2202.10194v1
- Date: Mon, 21 Feb 2022 13:08:01 GMT
- Authors: Mark Kelly, Harry Dunne, Gilles Bourque, Stephen Dooley
- Abstract summary: The method adapts the data-intensive Machine Learned Optimization of Chemical Kinetics algorithm for compact model generation.
A set of logical rules is defined which constructs a minimally sized virtual reaction network comprising three additional nodes (N, NO, NO2).
The resulting eighteen-node virtual reaction network is processed by the MLOCK coded algorithm to produce a plethora of compact model candidates for NOX formation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A novel compute intensification methodology for the construction of
low-dimensional, high-fidelity "compact" kinetic models for NOX formation is
designed and demonstrated. The method adapts the data-intensive Machine Learned
Optimization of Chemical Kinetics (MLOCK) algorithm for compact model
generation through the use of a Latin square method for virtual reaction network
generation. A set of logical rules is defined which constructs a minimally
sized virtual reaction network comprising three additional nodes (N, NO, NO2).
This NOX virtual reaction network is appended to a pre-existing compact model
for methane combustion comprising fifteen nodes.
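For intuition only (the paper does not publish its code, and all names below are illustrative), the Latin square idea can be sketched in a few lines: each candidate parameter level appears exactly once per row and column, so a small number of samples still covers every level of every factor.

```python
import random

def latin_square(n, seed=0):
    """Return an n x n Latin square: each symbol 0..n-1 appears
    exactly once in every row and every column."""
    rng = random.Random(seed)
    # cyclic base square, then shuffle rows and permute columns;
    # both operations preserve the Latin property
    square = [[(i + j) % n for j in range(n)] for i in range(n)]
    rng.shuffle(square)                       # permute rows
    cols = list(range(n))
    rng.shuffle(cols)                         # permute columns
    return [[row[c] for c in cols] for row in square]

# Hypothetical use: assign 3 candidate levels to the rate parameters
# coupling the three appended NOx nodes (N, NO, NO2).
levels = latin_square(3)
```

The cyclic construction guarantees validity; the shuffles merely vary which design is drawn.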
The resulting eighteen-node virtual reaction network is processed by the
MLOCK coded algorithm to produce a plethora of compact model candidates for NOX
formation during methane combustion. MLOCK automatically: populates the terms
of the virtual reaction network with candidate inputs; measures the success of
the resulting compact model candidates (in reproducing a broad set of gas
turbine industry-defined performance targets); selects regions of input
parameter space showing models of best performance; refines the input
parameters to give better performance; and makes an ultimate selection of the
best performing model or models.
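The five automated stages listed above follow a generic coarse-sample/select/refine pattern. As a hedged sketch only (the real MLOCK objective, parameter ranges, and selection rules are not reproduced here; a toy quadratic stands in for the fitness function):

```python
import random

def fitness(params):
    """Placeholder objective: fraction of performance targets a compact
    model built from `params` reproduces. MLOCK scores candidates against
    gas-turbine targets; a toy quadratic stands in so this sketch runs."""
    return 1.0 - sum((p - 0.3) ** 2 for p in params) / len(params)

def mlock_style_search(n_params=5, coarse=200, refine=200, top_k=10, seed=1):
    rng = random.Random(seed)
    # 1. populate the network terms with broadly sampled candidate inputs
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n_params)]
           for _ in range(coarse)]
    # 2. measure each candidate model's success
    pop.sort(key=fitness, reverse=True)
    # 3. select the best-performing region of input parameter space
    elite = pop[:top_k]
    # 4. refine: resample tightly around the elite candidates
    for _ in range(refine):
        parent = rng.choice(elite)
        elite.append([min(1.0, max(0.0, p + rng.gauss(0.0, 0.05)))
                      for p in parent])
    # 5. ultimate selection of the best performing model
    return max(elite, key=fitness)

best = mlock_style_search()
```

The design choice mirrored here is the two-phase search: cheap broad coverage first, then concentrated sampling only where good models were found.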
By this method, it is shown that a number of compact model candidates exist
that show fidelities in excess of 75% in reproducing industry-defined
performance targets, with one model valid to >75% across fuel/air equivalence
ratios of 0.5-1.0. However, to meet the full fuel/air equivalence ratio
performance envelope defined by industry, we show that with this minimal
virtual reaction network, two further compact models are required.
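In practice, a simulation would then have to choose between the complementary compact models by operating regime. A hypothetical dispatcher (the equivalence-ratio splits and model names below are invented; the paper only states that one model covers phi = 0.5-1.0 and that two further models are needed for the full envelope):

```python
# Hypothetical dispatcher between complementary compact NOx models.
MODELS = [
    ((0.5, 1.0), "compact_model_A"),   # reported lean-to-stoichiometric range
    ((0.3, 0.5), "compact_model_B"),   # invented very-lean range
    ((1.0, 2.0), "compact_model_C"),   # invented rich range
]

def select_model(phi):
    """Return the compact model whose validity window contains phi."""
    for (lo, hi), name in MODELS:
        if lo <= phi <= hi:
            return name
    raise ValueError(f"no compact model valid at phi = {phi}")
```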
Related papers
- Energy-Based Diffusion Language Models for Text Generation [126.23425882687195]
Energy-based Diffusion Language Model (EDLM) is an energy-based model operating at the full sequence level for each diffusion step.
Our framework offers a 1.3× sampling speedup over existing diffusion models.
arXiv Detail & Related papers (2024-10-28T17:25:56Z)
- Flow Generator Matching [35.371071097381346]
Flow Generator Matching (FGM) is designed to accelerate the sampling of flow-matching models into a one-step generation.
On the CIFAR10 unconditional generation benchmark, our one-step FGM model achieves a new record Fréchet Inception Distance (FID) score of 3.08.
MM-DiT-FGM one-step text-to-image model demonstrates outstanding industry-level performance.
arXiv Detail & Related papers (2024-10-25T05:41:28Z)
- Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation [56.79064699832383]
We establish a Cloud-Edge Elastic Model Adaptation (CEMA) paradigm in which the edge models only need to perform forward propagation.
In our CEMA, to reduce the communication burden, we devise two criteria to exclude unnecessary samples from uploading to the cloud.
arXiv Detail & Related papers (2024-02-27T08:47:19Z)
- A-SDM: Accelerating Stable Diffusion through Redundancy Removal and Performance Optimization [54.113083217869516]
In this work, we first explore the computational redundancy part of the network.
We then prune the redundancy blocks of the model and maintain the network performance.
Thirdly, we propose a global-regional interactive (GRI) attention to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z)
- Entropic Score metric: Decoupling Topology and Size in Training-free NAS [18.804303642485895]
This paper contributes with a novel training-free metric, named Entropic Score, to estimate model expressivity through the aggregated element-wise entropy of its activations.
A proper combination with LogSynflow, to search for model size, yields superior capability to completely design high-performance Hybrid Transformers for edge applications in less than 1 GPU hour.
arXiv Detail & Related papers (2023-10-06T11:49:21Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- A Generative Approach for Production-Aware Industrial Network Traffic Modeling [70.46446906513677]
We investigate the network traffic data generated from a laser cutting machine deployed in a Trumpf factory in Germany.
We analyze the traffic statistics, capture the dependencies between the internal states of the machine, and model the network traffic as a production state dependent process.
We compare the performance of various generative models including variational autoencoder (VAE), conditional variational autoencoder (CVAE), and generative adversarial network (GAN).
arXiv Detail & Related papers (2022-11-11T09:46:58Z)
- Toward Development of Machine Learned Techniques for Production of Compact Kinetic Models [0.0]
Chemical kinetic models are an essential component in the development and optimisation of combustion devices.
We present a novel automated compute intensification methodology to produce overly-reduced and optimised chemical kinetic models.
arXiv Detail & Related papers (2022-02-16T12:31:24Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
Distinct models are required to be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions to describe the R-D behavior of NIC using deep network and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Toward Machine Learned Highly Reduce Kinetic Models For Methane/Air Combustion [0.0]
Kinetic models are used to test the effect of operating conditions, fuel composition and combustor design compared to physical experiments.
We propose a novel data orientated three-step methodology to produce compact models that replicate a target set of detailed model properties to a high fidelity.
The strategy is demonstrated through the production of a 19 species and a 15 species compact model for methane/air combustion.
arXiv Detail & Related papers (2021-03-15T13:29:08Z)
- Maximum Entropy Model Rollouts: Fast Model Based Policy Optimization without Compounding Errors [10.906666680425754]
We propose a Dyna-style model-based reinforcement learning algorithm, which we call Maximum Entropy Model Rollouts (MEMR).
To eliminate the compounding errors, we only use our model to generate single-step rollouts.
arXiv Detail & Related papers (2020-06-08T21:38:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.