Three-layer deep learning network random trees for fault detection in chemical production process
- URL: http://arxiv.org/abs/2405.00311v3
- Date: Sat, 10 Aug 2024 08:16:49 GMT
- Title: Three-layer deep learning network random trees for fault detection in chemical production process
- Authors: Ming Lu, Zhen Gao, Ying Zou, Zuguo Chen, Pei Li
- Abstract summary: We propose a novel fault detection model named three-layer deep learning network random trees (TDLN-trees).
First, the deep learning component extracts temporal features from industrial data, combining and transforming them into a higher-level data representation.
Second, the machine learning component processes and classifies the features extracted in the first step.
- Score: 13.996134219601513
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the development of technology, the chemical production process is becoming increasingly complex and large-scale, making fault detection particularly important. However, current detection methods struggle to address the complexities of large-scale production processes. In this paper, we integrate the strengths of deep learning and machine learning technologies, combining the advantages of bidirectional long short-term memory neural networks, fully connected neural networks, and the extra trees algorithm to propose a novel fault detection model named three-layer deep learning network random trees (TDLN-trees). First, the deep learning component extracts temporal features from industrial data, combining and transforming them into a higher-level data representation. Second, the machine learning component processes and classifies the features extracted in the first step. An experimental analysis based on the Tennessee Eastman process verifies the superiority of the proposed method.
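To make the two-stage pipeline concrete, the sketch below wires a BiLSTM plus fully connected layer (the deep learning component) to an extra trees classifier (the machine learning component), mirroring the description in the abstract. It is a minimal sketch under stated assumptions: the layer sizes, window length, number of trees, and the 52-variable input (matching the Tennessee Eastman process variable count) are illustrative defaults, not values reported in the paper.

```python
# Hedged sketch of a TDLN-trees-style pipeline: BiLSTM -> fully connected
# layer for temporal feature extraction, then an extra trees classifier.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.ensemble import ExtraTreesClassifier

class DeepFeatureExtractor(nn.Module):
    """BiLSTM + fully connected layer producing a higher-level representation."""
    def __init__(self, n_vars=52, hidden=64, feat_dim=32):
        super().__init__()
        self.bilstm = nn.LSTM(n_vars, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, feat_dim)

    def forward(self, x):                 # x: (batch, window, n_vars)
        out, _ = self.bilstm(x)           # (batch, window, 2 * hidden)
        return torch.relu(self.fc(out[:, -1, :]))  # last time step -> (batch, feat_dim)

def detect_faults(train_windows, train_labels, test_windows):
    """Extract deep temporal features, then classify them with extra trees."""
    extractor = DeepFeatureExtractor()
    extractor.eval()                      # assumes the extractor was trained beforehand
    with torch.no_grad():
        f_train = extractor(train_windows).numpy()
        f_test = extractor(test_windows).numpy()
    clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
    clf.fit(f_train, train_labels)
    return clf.predict(f_test)
```

In this reading, the deep network serves purely as a feature extractor over sliding windows of process measurements, and the tree ensemble handles the final fault/normal decision; how the paper trains the two stages jointly or separately is not specified here.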
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Homological Neural Networks: A Sparse Architecture for Multivariate Complexity [0.0]
We develop a novel deep neural network unit characterized by a sparse higher-order graphical architecture built over the homological structure of underlying data.
Results demonstrate the advantages of this novel design, which can match or surpass state-of-the-art machine learning and deep learning models using only a fraction of the parameters.
arXiv Detail & Related papers (2023-06-27T09:46:16Z) - MixedTeacher: Knowledge Distillation for fast inference textural anomaly detection [4.243356707599485]
Unsupervised learning for anomaly detection has been at the heart of image processing research.
We propose a new method based on the promising concept of knowledge distillation.
The proposed texture anomaly detector has an outstanding capability to detect defects in any texture and a fast inference time compared to the SOTA methods.
arXiv Detail & Related papers (2023-06-16T14:14:20Z) - Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z) - Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Artificial Neural Network and its Application Research Progress in Distillation [3.2484467083803583]
Artificial neural networks learn various rules and algorithms to form different ways of processing information.
This article gives a basic overview of artificial neural networks and surveys domestic and international research on applying them to distillation.
arXiv Detail & Related papers (2021-10-01T06:25:53Z) - Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that solves the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z) - Exploring the Connection Between Binary and Spiking Neural Networks [1.329054857829016]
We bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks.
We show that training Spiking Neural Networks in the extreme quantization regime results in near full precision accuracies on large-scale datasets.
arXiv Detail & Related papers (2020-02-24T03:46:51Z) - A Novel Visual Fault Detection and Classification System for Semiconductor Manufacturing Using Stacked Hybrid Convolutional Neural Networks [0.24999074238880484]
Automated visual inspection in the semiconductor industry aims to detect and classify manufacturing defects.
This contribution introduces a novel deep neural network based hybrid approach.
Consisting of stacked hybrid convolutional neural networks (SH-CNN) and inspired by current approaches to visual attention, the realized system directs its focus across levels of detail within the inspected structures.
arXiv Detail & Related papers (2019-11-25T21:58:28Z)