A review of neural network algorithms and their applications in supercritical extraction
- URL: http://arxiv.org/abs/2011.05279v1
- Date: Sat, 31 Oct 2020 01:51:02 GMT
- Title: A review of neural network algorithms and their applications in supercritical extraction
- Authors: Yu Qi, Zhaolan Zheng
- Abstract summary: This paper briefly describes the basic concepts of and research progress in neural networks and supercritical extraction.
It aims to provide a reference for the development and innovation of industrial technology.
- Score: 5.455337487096457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks realize multi-parameter optimization and control by
simulating certain mechanisms of the human brain. They can be used in many
fields, such as signal processing, intelligent driving, combinatorial
optimization, vehicle anomaly detection, and chemical process optimization and
control. Supercritical extraction is a new type of high-efficiency chemical
separation process, used mainly for the separation and purification of natural
substances, and its outcome depends on many process factors. A neural network
model can quickly optimize the process parameters and predict the experimental
results under different process conditions, which helps to reveal the
underlying laws of the experiment and to determine the optimal experimental
conditions. This paper briefly describes the basic concepts of and research
progress in neural networks and supercritical extraction, and summarizes the
application of neural network algorithms in supercritical extraction, aiming to
provide a reference for the development and innovation of industrial technology.
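As a minimal illustration of the kind of model the abstract describes, the sketch below fits a small feedforward network to synthetic supercritical-extraction data. The four input parameters (pressure, temperature, extraction time, CO2 flow rate), the network layout, and the yield function generating the data are all assumptions for demonstration only, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic process conditions, normalized to [0, 1]:
# pressure, temperature, extraction time, CO2 flow rate.
X = rng.uniform(size=(200, 4))
# Assumed (made-up) yield function, used only to generate training data.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2] + 0.2 * X[:, 3]).reshape(-1, 1)

# One hidden layer of tanh units, linear output.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # hidden layer, prediction

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

lr = 0.1
loss_before = mse(forward(X)[1], y)

for _ in range(500):                     # plain batch gradient descent
    h, pred = forward(X)
    g_out = 2.0 * (pred - y) / len(X)    # dLoss/dpred
    g_h = (g_out @ W2.T) * (1.0 - h**2)  # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

loss_after = mse(forward(X)[1], y)
print(f"MSE before: {loss_before:.4f}  after: {loss_after:.4f}")
```

Once trained on real extraction data, such a model can be evaluated across a grid of process conditions to locate parameter settings that maximize predicted yield, which is the role the abstract attributes to neural networks in this field.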
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Design and development of opto-neural processors for simulation of neural networks trained in image detection for potential implementation in hybrid robotics [0.0]
Living neural networks offer advantages of lower power consumption, faster processing, and biological realism.
This work proposes a simulated living neural network trained indirectly by backpropagation of STDP-based algorithms, using precise activation via optogenetics.
arXiv Detail & Related papers (2024-01-17T04:42:49Z)
- Towards Optimal Neural Networks: the Role of Sample Splitting in Hyperparameter Selection [10.083181657981292]
We construct a novel theory for understanding the effectiveness of neural networks.
Specifically, we explore the rationale underlying a common practice during the construction of neural network models.
arXiv Detail & Related papers (2023-07-15T06:46:40Z)
- Machine learning enabled experimental design and parameter estimation for ultrafast spin dynamics [54.172707311728885]
We introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED)
Our method employs a neural network model for large-scale spin dynamics simulations for precise distribution and utility calculations in BOED.
Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time.
arXiv Detail & Related papers (2023-06-03T06:19:20Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically finds a better initialization at negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- Artificial Neural Network and its Application Research Progress in Distillation [3.2484467083803583]
Artificial neural networks learn various rules and algorithms to form different ways of processing information.
This article gives a basic overview of artificial neural networks, and introduces the application research of artificial neural networks in distillation at home and abroad.
arXiv Detail & Related papers (2021-10-01T06:25:53Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network toward a desired output target; the control signal can then be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Neural network algorithm and its application in reactive distillation [3.7692411550925673]
Reactive distillation is based on the coupling of chemical reaction and distillation.
The control and optimization of the reactive distillation process must rely on neural network algorithms.
This paper briefly describes the characteristics and research progress of reactive distillation technology and neural network algorithms.
arXiv Detail & Related papers (2020-11-16T02:18:52Z)
- Factorized Neural Processes for Neural Processes: $K$-Shot Prediction of Neural Responses [9.792408261365043]
We develop a Factorized Neural Process to infer a neuron's tuning function from a small set of stimulus-response pairs.
We show on simulated responses that the predictions and reconstructed receptive fields from the Neural Process approach ground truth with increasing number of trials.
We believe this novel deep learning systems identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
arXiv Detail & Related papers (2020-10-22T15:43:59Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that solves the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.