Study on the simulation control of neural network algorithm in thermally
coupled distillation
- URL: http://arxiv.org/abs/2102.03506v1
- Date: Sat, 6 Feb 2021 04:18:04 GMT
- Title: Study on the simulation control of neural network algorithm in thermally
coupled distillation
- Authors: ZhaoLan Zheng, Yu Qi
- Abstract summary: The neural network algorithm learns quickly and can approximate arbitrary nonlinear functions.
This article summarizes the research progress of artificial neural networks and the application of neural networks in thermally coupled distillation.
- Score: 7.313669465917949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thermally coupled distillation is a new energy-saving method, but its
traditional simulation and calculation process is complicated, and optimization
methods based on that process struggle to obtain a good feasible solution. The
neural network algorithm learns quickly and can approximate arbitrary nonlinear
functions. For the problems in complex process control systems,
neural network control does not require cumbersome control structures or
precise mathematical models. When training the network, only the required input
and output samples are given, so that the network can approximate the dynamic
performance of the system. This method can effectively solve the
mathematical model of the thermally coupled distillation process, and quickly
obtain the solution of the optimized variables and the objective function. This
article summarizes the research progress of artificial neural networks, the
optimization control of thermally coupled distillation, and the application of
neural networks in thermally coupled distillation.
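As a rough illustration of the approach described above (training a network on input and output samples alone, with no mechanistic distillation model), the following sketch fits a small feedforward network to synthetic data. The process response, variable names, and network size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical stand-in for plant input-output samples; the "process
# response" below is an arbitrary smooth nonlinear function.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))        # e.g. scaled reflux ratio, heat duty
y = np.sin(np.pi * X[:, :1]) * np.cos(X[:, 1:])  # nonlinear "process response"

# One hidden layer trained on the samples alone -- no mechanistic model,
# only (input, output) pairs, as described in the abstract.
n_hidden, lr = 16, 0.1
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # hidden layer, network output

_, pred0 = forward(X)
loss0 = float(np.mean((pred0 - y) ** 2))   # mean squared error before training

for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                          # dL/dpred (up to a constant factor)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2          # plain gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = float(np.mean((pred - y) ** 2))      # mean squared error after training
```

The point of the sketch is the workflow, not the toy model: once such a surrogate fits the sampled dynamics, an optimizer can query it in place of the complicated simulation when searching for good operating variables.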
Related papers
- Optimizing Temperature Distributions for Training Neural Quantum States using Parallel Tempering [0.0]
We show that temperature optimization can significantly increase the success rate of variational algorithms.
We demonstrate this using two different neural networks, a restricted Boltzmann machine and a feedforward network.
arXiv Detail & Related papers (2024-10-30T13:48:35Z)
- Gradient-free online learning of subgrid-scale dynamics with neural emulators [5.283819482083864]
We propose a generic algorithm to train machine learning-based subgrid parametrizations online.
We are able to train a parametrization that recovers most of the benefits of online strategies without having to compute the gradient of the original solver.
arXiv Detail & Related papers (2023-10-30T09:46:35Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Emulation Learning for Neuromimetic Systems [0.0]
Building on our recent research on neural quantization systems, results on learning quantized motions and resilience to channel dropouts are reported.
We propose a general Deep Q Network (DQN) algorithm that can not only learn the trajectory but also exhibit the advantages of resilience to channel dropout.
arXiv Detail & Related papers (2023-05-04T22:47:39Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Neural network algorithm and its application in temperature control of distillation tower [0.0]
This article briefly describes the basic concepts and research progress of neural network and distillation tower temperature control.
It systematically summarizes the application of neural network in distillation tower control, aiming to provide reference for the development of related industries.
arXiv Detail & Related papers (2021-01-03T08:33:05Z)
- A review of neural network algorithms and their applications in supercritical extraction [5.455337487096457]
This paper briefly describes the basic concepts and research progress of neural networks and supercritical extraction.
It aims to provide reference for the development and innovation of industry technology.
arXiv Detail & Related papers (2020-10-31T01:51:02Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that iteratively partitions the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.