Simple Full-Spectrum Correlated k-Distribution Model based on Multilayer Perceptron
- URL: http://arxiv.org/abs/2403.12993v1
- Date: Tue, 5 Mar 2024 08:04:01 GMT
- Title: Simple Full-Spectrum Correlated k-Distribution Model based on Multilayer Perceptron
- Authors: Xin Wang, Yucheng Kuang, Chaojun Wang, Hongyuan Di, Boshu He
- Abstract summary: The simple FSCK MLP (SFM) model is developed to balance accuracy, efficiency, and storage.
Several test cases have been carried out to compare the developed SFM model with other FSCK tools, including look-up tables and the traditional FSCK MLP (TFM) model.
Results show that the SFM model achieves accuracy even better than that of look-up tables, at a tiny computational cost.
- Score: 6.354085763851961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While neural networks have been successfully applied to the full-spectrum correlated k-distribution (FSCK) method over a large range of thermodynamic states, with k-values predicted by a trained multilayer perceptron (MLP) model, the required a-values still need to be calculated on the fly, which theoretically degrades the FSCK method and may introduce errors. On the other hand, the overly complicated structure of the current MLP model inevitably slows down calculation. Therefore, to balance accuracy, efficiency, and storage, a simple MLP designed around the nature of the FSCK method is developed, i.e., the simple FSCK MLP (SFM) model, from which correlated k-values and the corresponding a-values can be efficiently obtained. Several test cases have been carried out to compare the developed SFM model with other FSCK tools, including look-up tables and the traditional FSCK MLP (TFM) model. Results show that the SFM model achieves accuracy even better than that of look-up tables, at a computational cost far below that of the TFM model. Considering accuracy, efficiency, and portability, the SFM model is not only an excellent tool for the prediction of spectral properties, but also provides a way to reduce the errors caused by nonlinear effects.
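To make the idea concrete, here is a minimal sketch of an MLP with separate k and a output heads, in the spirit of the SFM model described above. The input features (temperature, pressure, mole fraction, g-value), layer sizes, and PyTorch framework are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the authors' code) of an MLP
# that jointly predicts a correlated k-value and its matching a-value.
import torch
import torch.nn as nn

class SimpleFSCKMLP(nn.Module):
    def __init__(self, n_inputs=4, n_hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
        )
        self.k_head = nn.Linear(n_hidden, 1)  # predicts log k; exponentiated so k > 0
        self.a_head = nn.Linear(n_hidden, 1)  # predicts the corresponding a-value

    def forward(self, x):
        h = self.body(x)
        return torch.exp(self.k_head(h)), self.a_head(h)

# Toy usage: a batch of normalized (T, P, mole fraction, g) inputs.
k, a = SimpleFSCKMLP()(torch.rand(8, 4))
```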
Related papers
- Machine Learning for Improved Density Functional Theory Thermodynamics [0.0]
We present a machine learning (ML) approach to systematically correct intrinsic energy resolution errors in density functional theory calculations.
A neural network model has been trained to predict the discrepancy between DFT-calculated and experimentally measured enthalpies for binary and ternary alloys and compounds.
We illustrate the effectiveness of this method by applying it to the Al-Ni-Pd and Al-Ni-Ti systems, which are of interest for high-temperature applications in aerospace and protective coatings.
arXiv Detail & Related papers (2025-03-07T15:46:30Z)
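The correction scheme this abstract describes is a delta-learning setup; the sketch below (synthetic data, assumed descriptors, a scikit-learn stand-in model) shows the general pattern of training on the DFT-experiment discrepancy rather than on the raw target.

```python
# Minimal delta-learning sketch (assumed, not from the paper): train a
# regressor on the gap between DFT enthalpies and experiment, then add
# the predicted correction back to new DFT values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))                  # composition/structure descriptors (toy)
h_dft = rng.normal(-1.0, 0.3, 200)        # DFT-calculated enthalpies (toy)
h_exp = h_dft + 0.05 * X[:, 0] - 0.02     # "measured" enthalpies (toy)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, h_exp - h_dft)               # learn the discrepancy only

h_corrected = h_dft + model.predict(X)    # corrected enthalpy estimate
```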
- An Efficient Hierarchical Preconditioner-Learner Architecture for Reconstructing Multi-scale Basis Functions of High-dimensional Subsurface Fluid Flow [4.303037819686676]
We present an efficient hierarchical preconditioner-learner architecture that reconstructs multi-scale basis functions of high-dimensional subsurface fluid flow.
FP-HMsNet achieved an MSE of 0.0036, an MAE of 0.0375, and an R^2 of 0.9716 on the testing set, significantly outperforming existing models.
This model offers a novel method for efficient and accurate subsurface fluid flow modeling, with promising potential for more complex real-world applications.
arXiv Detail & Related papers (2024-11-01T09:17:08Z)
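For reference, the test metrics quoted above (MSE, MAE, R^2) are conventionally computed as below; the numbers here are synthetic, not from the paper.

```python
# Standard regression metrics as reported in the entry above (toy data).
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y_true = np.array([0.9, 1.1, 1.4, 0.7])
y_pred = np.array([0.88, 1.15, 1.35, 0.72])

print(mean_squared_error(y_true, y_pred))   # MSE
print(mean_absolute_error(y_true, y_pred))  # MAE
print(r2_score(y_true, y_pred))             # R^2 (coefficient of determination)
```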
- PO-MSCKF: An Efficient Visual-Inertial Odometry by Reconstructing the Multi-State Constrained Kalman Filter with the Pose-only Theory [0.0]
Visual-Inertial Odometry (VIO) is crucial for payload-constrained robots.
We propose to reconstruct the MSCKF VIO with the novel Pose-Only (PO) multi-view geometry description.
The new filter does not require any feature position information, which removes the associated computational cost and linearization errors.
arXiv Detail & Related papers (2024-07-02T02:18:35Z)
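PO-MSCKF reworks the measurement model of an MSCKF, not the filter algebra itself. As a reference point, here is the standard Kalman measurement update that such filters build on; this is a generic sketch, not the paper's pose-only formulation.

```python
# Generic Kalman measurement update (toy example, not PO-MSCKF itself).
import numpy as np

def kf_update(x, P, z, H, R):
    """x: state mean, P: state covariance, z: measurement, H: Jacobian, R: noise cov."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy usage: 2-state filter observing the first state.
x, P = np.zeros(2), np.eye(2)
print(kf_update(x, P, z=np.array([0.3]), H=np.array([[1.0, 0.0]]), R=np.eye(1) * 0.1))
```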
- Adaptive Fuzzy C-Means with Graph Embedding [84.47075244116782]
Fuzzy clustering algorithms can be roughly categorized into two main groups: Fuzzy C-Means (FCM) based methods and mixture model based methods.
We propose a novel FCM based clustering model that is capable of automatically learning an appropriate membership degree hyperparameter value.
arXiv Detail & Related papers (2024-05-22T08:15:50Z)
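For context on the Fuzzy C-Means entry above, here is a minimal sketch of the classic FCM iteration with a fixed fuzzifier m; the paper's contribution, learning the membership hyperparameter automatically, is not reproduced here.

```python
# Classic Fuzzy C-Means iteration (toy sketch with a fixed fuzzifier m).
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))              # inverse-distance membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

centers, U = fcm(np.random.default_rng(1).random((60, 2)))
```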
- Improving and generalizing flow-based generative models with minibatch optimal transport [90.01613198337833]
We introduce the generalized conditional flow matching (CFM) technique for continuous normalizing flows (CNFs).
CFM features a stable regression objective like that used to train the flow in diffusion models but enjoys the efficient inference of deterministic flow models.
A variant of our objective is optimal transport CFM (OT-CFM), which creates simpler flows that are more stable to train and lead to faster inference.
arXiv Detail & Related papers (2023-02-01T14:47:17Z)
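The CFM regression objective mentioned above can be stated compactly; below is a sketch of the loss with the straight-line conditional path. The OT-CFM variant additionally pairs x0 and x1 by minibatch optimal transport before this step (not shown).

```python
# Conditional flow matching loss with the straight (constant-velocity) path.
import torch

def cfm_loss(v_theta, x0, x1):
    """v_theta: network mapping (t, x_t) -> predicted velocity; x0: noise; x1: data."""
    t = torch.rand(x0.shape[0], 1)     # one time per sample in [0, 1)
    x_t = (1 - t) * x0 + t * x1        # point on the straight conditional path
    u_t = x1 - x0                      # target velocity of that path
    return ((v_theta(t, x_t) - u_t) ** 2).mean()

# Toy usage with a placeholder "network"; a real model is trained on this loss.
v = lambda t, x: x
print(cfm_loss(v, torch.randn(16, 2), torch.randn(16, 2)))
```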
- Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E^3), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z)
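Both ensembles and sparse MoEs build on layers like the following top-k routed mixture. This is a generic illustrative sketch (assumed sizes, dense loop for clarity rather than speed), not the E^3 architecture itself.

```python
# Generic top-k sparse MoE layer: a router picks k experts per input
# and mixes their outputs with renormalized gate weights.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, d=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d, d) for _ in range(n_experts))
        self.k = k

    def forward(self, x):                          # x: (batch, d)
        gates = self.router(x).softmax(dim=-1)
        top_w, top_i = gates.topk(self.k, dim=-1)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)   # renormalize over top-k
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # loop form for readability
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

y = SparseMoE()(torch.randn(5, 64))
```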
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but this also leads to a huge computation cost.
We explore accelerating large-model inference through conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
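A minimal sketch of the parameter-preserving split that MoEfication's abstract describes: partition a dense FFN's hidden neurons into equal-size expert slices. The index-order grouping here is an assumption for illustration; the paper's actual grouping and routing strategies are not reproduced.

```python
# Split a dense FFN into expert slices with the same total parameter count.
import torch

def moefy_ffn(w_in, w_out, n_experts):
    """w_in: (hidden, d) first FFN weight; w_out: (d, hidden) second FFN weight."""
    hidden = w_in.shape[0]
    per = hidden // n_experts
    experts = []
    for e in range(n_experts):
        rows = slice(e * per, (e + 1) * per)
        experts.append((w_in[rows, :], w_out[:, rows]))   # one expert's weight slice
    return experts

experts = moefy_ffn(torch.randn(1024, 256), torch.randn(256, 1024), n_experts=8)
```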
- A Data-driven feature selection and machine-learning model benchmark for the prediction of longitudinal dispersion coefficient [29.58577229101903]
An accurate prediction of the longitudinal dispersion (LD) coefficient can produce a performance leap in related simulations.
In this study, a globally optimal feature set was proposed through numerical comparison of distilled local optima in performance across representative ML models.
Results show that the support vector machine has significantly better performance than other models.
arXiv Detail & Related papers (2021-07-16T09:50:38Z)
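As an illustration of the winning model class in the benchmark above, here is a short scikit-learn sketch of fitting a support vector regressor on synthetic stand-in features (not the paper's data or feature set).

```python
# Support vector regression on toy stand-in features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((150, 4))                                  # hydraulic features (toy)
y = 5 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 150)   # LD coefficient (toy)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict(X[:3]))
```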
- Active Learning with Multifidelity Modeling for Efficient Rare Event Simulation [0.0]
We propose a framework for active learning with multifidelity modeling, emphasizing the efficient estimation of rare events.
Our framework works by fusing a low-fidelity (LF) prediction with a high-fidelity (HF) inferred correction, then filtering the corrected LF prediction to decide whether to call the HF model.
For improved robustness when estimating smaller failure probabilities, we propose using dynamic active learning functions that decide when to call the HF model.
arXiv Detail & Related papers (2021-06-25T17:44:28Z)
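The fuse-then-filter loop described above can be sketched as follows, with toy LF/HF models, a Gaussian process learning the discrepancy, and a simple uncertainty threshold standing in for the paper's dynamic active learning functions.

```python
# Multifidelity fuse-then-filter sketch (toy models, assumed interfaces).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def lf_model(x):                       # cheap low-fidelity surrogate (toy)
    return np.sin(x).ravel()

def hf_model(x):                       # expensive high-fidelity model (toy)
    return (np.sin(x) + 0.1 * np.cos(5 * x)).ravel()

# Learn the LF -> HF discrepancy from a handful of HF evaluations.
x_train = np.linspace(0, 3, 8)[:, None]
gp = GaussianProcessRegressor().fit(x_train, hf_model(x_train) - lf_model(x_train))

def predict(x, tol=0.05):
    corr, std = gp.predict(x, return_std=True)
    fused = lf_model(x) + corr                 # corrected LF prediction
    uncertain = std > tol                      # stand-in for a dynamic AL rule
    fused[uncertain] = hf_model(x[uncertain])  # call HF only where unsure
    return fused

print(predict(np.linspace(0, 3, 5)[:, None]))
```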
- Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory (LSTM) layers to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
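A minimal sketch (assumed sensor count, window length, and layer sizes) of an LSTM-plus-MLP regressor of the kind the abstract above describes for RUL prediction.

```python
# LSTM encoder followed by an MLP head that regresses RUL (toy sizes).
import torch
import torch.nn as nn

class RULNet(nn.Module):
    def __init__(self, n_sensors=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                    # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # regress RUL from the last time step

rul = RULNet()(torch.rand(4, 30, 14))        # -> (4, 1) predicted RUL values
```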
- Blending MPC & Value Function Approximation for Efficient Reinforcement Learning [42.429730406277315]
Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems.
We present a framework for improving on MPC with model-free reinforcement learning (RL).
We show that our approach can obtain performance comparable with MPC with access to true dynamics.
arXiv Detail & Related papers (2020-12-10T11:32:01Z)
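One common way to blend MPC with a learned value function, matching the idea above, is to score short model rollouts by accumulated cost plus a learned terminal value. The random-shooting sketch below uses toy dynamics and a hand-written stand-in for the RL value function.

```python
# Random-shooting MPC with a learned terminal value closing the horizon (toy).
import numpy as np

def dynamics(x, u):                  # model (toy: double integrator)
    return x + 0.1 * np.array([x[1], u])

def stage_cost(x, u):
    return x[0] ** 2 + 0.01 * u ** 2

def terminal_value(x):               # stands in for a value function learned by RL
    return 10 * x[0] ** 2 + x[1] ** 2

def mpc_action(x0, horizon=5, n_samples=256, seed=0):
    rng = np.random.default_rng(seed)
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_samples):       # random shooting for simplicity
        us = rng.uniform(-1, 1, horizon)
        x, cost = x0, 0.0
        for u in us:
            cost += stage_cost(x, u)
            x = dynamics(x, u)
        cost += terminal_value(x)    # value function replaces the infinite tail
        if cost < best_cost:
            best_u, best_cost = us[0], cost
    return best_u

print(mpc_action(np.array([1.0, 0.0])))
```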
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
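For context, the CNML distribution that ACNML amortizes can be written as a brute-force procedure: refit the model for every candidate label, evaluate that label's likelihood, and normalize. The sketch below does exactly that on a toy class-frequency model; ACNML's amortization trick itself is not reproduced.

```python
# Brute-force CNML: p(y|x) proportional to the likelihood of y under the
# model refit on the data augmented with (x, y).
import numpy as np

def cnml_distribution(fit, likelihood, data, x, labels):
    """fit(data) -> params; likelihood(params, x, y) -> p(y | x)."""
    scores = []
    for y in labels:
        params = fit(data + [(x, y)])        # refit with the candidate pair
        scores.append(likelihood(params, x, y))
    scores = np.asarray(scores)
    return scores / scores.sum()             # normalize over candidate labels

# Toy usage: a model that only fits class frequencies.
data = [(None, 0), (None, 0), (None, 1)]
fit = lambda d: np.bincount([y for _, y in d], minlength=2) / len(d)
likelihood = lambda p, x, y: p[y]
print(cnml_distribution(fit, likelihood, data, None, labels=[0, 1]))
```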