Application of Neural Network Algorithm in Propylene Distillation
- URL: http://arxiv.org/abs/2104.01774v1
- Date: Mon, 5 Apr 2021 05:06:29 GMT
- Title: Application of Neural Network Algorithm in Propylene Distillation
- Authors: Jinwei Lu, Ningrui Zhao
- Abstract summary: This article introduces the neural network model and its application in the propylene distillation tower.
The functional relationship between the product concentrations at the top and bottom of the tower and the process parameters is extremely complex.
Accurate measurement of these concentrations plays a key role in increasing propylene yield in ethylene production enterprises.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial neural network modeling does not require a mechanistic model of the process. It can map the implicit relationship between inputs and outputs and predict the performance of the system well, and it offers self-learning ability and high fault tolerance. In the rectification tower, the gas and liquid phases exchange heat and mass through countercurrent contact, so the functional relationship between the product concentrations at the top and bottom of the tower and the process parameters is extremely complex. This relationship can be captured accurately by artificial neural network algorithms. The key quality variables of the propylene distillation tower are the propane concentration at the top of the tower and the propylene concentration at the bottom of the tower; measuring them accurately plays a key role in increasing propylene yield in ethylene production enterprises. This article mainly introduces the neural network model and its application in the propylene distillation tower.
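As a minimal illustration of the kind of data-driven model described above, the sketch below trains a small feedforward network to predict the two key quality variables (propane concentration at the top of the tower and propylene concentration at the bottom) from a handful of process parameters. The choice of inputs, the network size, and the synthetic training data are hypothetical placeholders, not the configuration used in the paper.

```python
# Minimal soft-sensor sketch (hypothetical inputs and synthetic data,
# not the configuration used in the paper).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed process parameters (inputs): feed flow, reflux ratio,
# tower-top pressure, sensitive-plate temperature, reboiler duty.
n_samples = 2000
X = rng.uniform(
    low=[40.0, 10.0, 1.6, 40.0, 3.0],
    high=[60.0, 20.0, 2.0, 55.0, 5.0],
    size=(n_samples, 5),
)

# Synthetic targets standing in for analyzer/laboratory data:
#   y[:, 0] -> propane concentration at the top of the tower
#   y[:, 1] -> propylene concentration at the bottom of the tower
y = np.column_stack([
    0.005 + 0.002 * np.sin(X[:, 1] / 5.0) + 0.0005 * rng.standard_normal(n_samples),
    0.020 + 0.004 * np.cos(X[:, 3] / 10.0) + 0.0010 * rng.standard_normal(n_samples),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# A small multilayer perceptron; one hidden layer is often sufficient
# for a smooth static mapping from process parameters to compositions.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

print("test R^2:", model.score(scaler.transform(X_test), y_test))
```

In practice, the synthetic arrays would be replaced by historical plant records paired with laboratory or online-analyzer measurements of the two concentrations, and the trained network would serve as a soft sensor for compositions that are otherwise slow or expensive to measure.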
Related papers
- Fluctuation based interpretable analysis scheme for quantum many-body
snapshots [0.0]
Microscopically understanding and classifying phases of matter is at the heart of strongly-correlated quantum physics.
Here, we combine confusion learning with correlation convolutional neural networks, which yields fully interpretable phase detection.
Our work opens new directions in interpretable quantum image processing that is sensitive to long-range order.
arXiv Detail & Related papers (2023-04-12T17:59:59Z) - Leveraging Angular Distributions for Improved Knowledge Distillation [4.751886527142779]
In computer vision applications, the feature activations learned by a higher-capacity model have been observed to contain richer knowledge, highlighting complete objects while focusing less on the background.
We propose a new loss function for distillation, called angular margin-based distillation (AMD) loss.
We show that the proposed method has advantages in compatibility with other learning techniques, such as using fine-grained features, augmentation, and other distillation methods.
arXiv Detail & Related papers (2023-02-27T20:34:30Z) - Dilute neutron star matter from neural-network quantum states [58.720142291102135]
Low-density neutron matter is characterized by the formation of Cooper pairs and the onset of superfluidity.
We model this density regime by capitalizing on the expressivity of the hidden-nucleon neural-network quantum states combined with variational Monte Carlo and stochastic reconfiguration techniques.
arXiv Detail & Related papers (2022-12-08T17:55:25Z) - Directed Acyclic Graph Factorization Machines for CTR Prediction via
Knowledge Distillation [65.62538699160085]
We propose a Directed Acyclic Graph Factorization Machine (KD-DAGFM) to learn the high-order feature interactions from existing complex interaction models for CTR prediction via Knowledge Distillation.
KD-DAGFM achieves the best performance with less than 21.5% of the FLOPs of the state-of-the-art method in both online and offline experiments.
arXiv Detail & Related papers (2022-11-21T03:09:42Z) - Transformers with Learnable Activation Functions [63.98696070245065]
We use the Rational Activation Function (RAF) to learn optimal activation functions during training from the input data.
RAF opens a new research direction for analyzing and interpreting pre-trained models according to the learned activation functions.
arXiv Detail & Related papers (2022-08-30T09:47:31Z) - Gate-based spin readout of hole quantum dots with site-dependent
$g$-factors [101.23523361398418]
We experimentally investigate a hole double quantum dot in silicon by carrying out spin readout with gate-based reflectometry.
We show that characteristic features in the reflected phase signal arising from magneto-spectroscopy convey information on site-dependent $g$-factors in the two dots.
arXiv Detail & Related papers (2022-06-27T09:07:20Z) - Even your Teacher Needs Guidance: Ground-Truth Targets Dampen
Regularization Imposed by Self-Distillation [0.0]
Self-distillation, where the network architectures are identical, has been observed to improve generalization accuracy.
We consider an iterative variant of self-distillation in a kernel regression setting, in which successive steps incorporate both model outputs and the ground-truth targets.
We show that any such function obtained with self-distillation can be calculated directly as a function of the initial fit, and that infinitely many distillation steps yield the same optimization problem as the original, but with amplified regularization.
arXiv Detail & Related papers (2021-02-25T18:56:09Z) - Neural network algorithm and its application in temperature control of
distillation tower [0.0]
This article briefly describes the basic concepts and research progress of neural network and distillation tower temperature control.
It systematically summarizes the application of neural network in distillation tower control, aiming to provide reference for the development of related industries.
arXiv Detail & Related papers (2021-01-03T08:33:05Z) - Weight Distillation: Transferring the Knowledge in Neural Network
Parameters [48.32204633079697]
We propose Weight Distillation to transfer the knowledge in the large network parameters through a parameter generator.
Experiments on WMT16 En-Ro, NIST12 Zh-En, and WMT14 En-De machine translation tasks show that weight distillation can train a small network that is 1.88-2.94x faster than the large network but with competitive performance.
arXiv Detail & Related papers (2020-09-19T03:23:26Z) - Phases of two-dimensional spinless lattice fermions with first-quantized
deep neural-network quantum states [3.427639528860287]
First-quantized deep neural network techniques are developed for analyzing strongly coupled fermionic systems on the lattice.
We use a Slater-Jastrow inspired ansatz which exploits deep residual networks with convolutional residual blocks.
The flexibility of the neural-network ansatz results in a high level of accuracy when compared to exact diagonalization results on small systems.
arXiv Detail & Related papers (2020-07-31T23:43:52Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.