Neural Architecture Search For Fault Diagnosis
- URL: http://arxiv.org/abs/2002.07997v1
- Date: Wed, 19 Feb 2020 04:03:51 GMT
- Title: Neural Architecture Search For Fault Diagnosis
- Authors: Xudong Li, Yang Hu, Jianhua Zheng, Mingtao Li
- Abstract summary: Deep learning is suitable for processing big data, and has a strong feature extraction ability to realize end-to-end fault diagnosis systems.
Neural architecture search (NAS) is developing rapidly, and is becoming one of the next directions for deep learning.
In this paper, we propose a NAS method for fault diagnosis using reinforcement learning.
- Score: 6.226564415963648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven methods have made great progress in fault diagnosis, especially
deep learning methods. Deep learning is suitable for processing big data, and
has a strong feature extraction ability to realize end-to-end fault diagnosis
systems. However, designing neural network architecture requires rich
professional knowledge and debugging experience, and a lot of experiments are
needed to screen models and hyperparameters, increasing the difficulty of
developing deep learning models. Fortunately, neural architecture search (NAS)
is developing rapidly, and is becoming one of the next directions for deep
learning. In this paper, we propose a NAS method for fault diagnosis using
reinforcement learning. A recurrent neural network is used as an agent to
generate network architecture. The accuracy of the generated network on the
validation dataset is fed back to the agent as a reward, and the parameters of
the agent are updated through the policy gradient algorithm. We use the
PHM 2009 Data Challenge gearbox dataset to demonstrate the effectiveness of
the proposed method, and obtain state-of-the-art results compared with other
manually designed network structures. To the authors' best knowledge, this is
the first time that NAS has been applied to fault diagnosis.
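To make the search loop concrete, here is a minimal sketch of the controller-reward cycle described in the abstract: an LSTM agent samples one architecture decision per step, the sampled child network's validation accuracy is used as the reward, and the agent is updated with a REINFORCE-style policy gradient. The toy search space (kernel sizes and channel counts), the `Controller` class, and the `train_and_evaluate` placeholder are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative toy search space: for each layer, pick a kernel size and a channel count.
KERNEL_SIZES = [3, 5, 7]
CHANNELS = [16, 32, 64]
NUM_LAYERS = 4
HIDDEN = 64

class Controller(nn.Module):
    """RNN agent that emits one architecture decision per time step."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTMCell(HIDDEN, HIDDEN)
        self.kernel_head = nn.Linear(HIDDEN, len(KERNEL_SIZES))
        self.channel_head = nn.Linear(HIDDEN, len(CHANNELS))

    def sample(self):
        h = torch.zeros(1, HIDDEN)
        c = torch.zeros(1, HIDDEN)
        inp = torch.zeros(1, HIDDEN)
        arch, log_probs = [], []
        for _ in range(NUM_LAYERS):
            for head, options in ((self.kernel_head, KERNEL_SIZES),
                                  (self.channel_head, CHANNELS)):
                h, c = self.rnn(inp, (h, c))
                dist = torch.distributions.Categorical(logits=head(h))
                idx = dist.sample()
                log_probs.append(dist.log_prob(idx))
                arch.append(options[idx.item()])
                inp = h  # feed the hidden state forward as the next input
        return arch, torch.stack(log_probs).sum()

def train_and_evaluate(arch):
    """Placeholder: build the child CNN described by `arch`, train it briefly on
    the training split, and return its accuracy on the validation split."""
    raise NotImplementedError

controller = Controller()
optimizer = torch.optim.Adam(controller.parameters(), lr=3e-4)
baseline = 0.0  # moving-average baseline to reduce gradient variance

for step in range(500):
    arch, log_prob = controller.sample()
    reward = train_and_evaluate(arch)           # validation accuracy as the reward
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(reward - baseline) * log_prob      # REINFORCE / policy gradient update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```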
Related papers
- A Lightweight Neural Architecture Search Model for Medical Image Classification [15.244911514754547]
This paper presents ZO-DARTS+, a differentiable NAS algorithm that improves search efficiency through a novel method of generating sparse probabilities.
Experiments on five public medical datasets show that ZO-DARTS+ matches the accuracy of state-of-the-art solutions while reducing search time by up to a factor of three.
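For context on what "differentiable NAS" means here, the sketch below shows the generic DARTS-style mixed operation, where candidate operations are weighted by a softmax over learnable architecture parameters; ZO-DARTS+'s specific contributions (zeroth-order optimization and its sparse probability generator) are not reproduced, and the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Generic differentiable-NAS building block: the output is a weighted sum of
    candidate operations, with weights given by a softmax over learnable logits.
    ZO-DARTS+ replaces the dense softmax with sparse probabilities (not shown here)."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture logits

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```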
arXiv Detail & Related papers (2024-05-06T13:33:38Z)
- Image classification network enhancement methods based on knowledge injection [8.885876832491917]
This paper proposes a multi-level hierarchical deep learning algorithm.
It is composed of a multi-level hierarchical deep neural network architecture and a multi-level hierarchical deep learning framework.
The experimental results show that the proposed algorithm can effectively explain the hidden information of the neural network.
arXiv Detail & Related papers (2024-01-09T09:11:41Z)
- Deep Reinforcement Learning Framework for Thoracic Diseases Classification via Prior Knowledge Guidance [49.87607548975686]
The scarcity of labeled data for related diseases poses a huge challenge to accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
Our approach's performance was demonstrated using the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z)
- Neural Architecture Search for Dense Prediction Tasks in Computer Vision [74.9839082859151]
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims at automatically designing neural network architectures in a data-driven manner rather than manually.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z)
- Improving the sample-efficiency of neural architecture search with reinforcement learning [0.0]
In this work, we would like to contribute to the area of Automated Machine Learning (AutoML).
Our focus is on one of the most promising research directions, reinforcement learning.
The validation accuracies of the child networks serve as a reward signal for training the controller.
We propose to replace this with a more modern and complex algorithm, PPO, which has been shown to be faster and more stable in other environments.
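As a rough illustration of what swapping the REINFORCE-style update for PPO changes, the snippet below shows the standard clipped PPO surrogate applied to a sampled architecture's log-probability. This is a generic sketch, not the paper's exact controller update, and the function and argument names are assumptions.

```python
import torch

def ppo_controller_loss(new_log_prob, old_log_prob, advantage, clip_eps=0.2):
    """Clipped PPO surrogate for one sampled architecture.
    advantage = validation accuracy (reward) minus a baseline."""
    ratio = torch.exp(new_log_prob - old_log_prob.detach())
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -torch.min(unclipped, clipped)  # minimizing this maximizes the clipped objective
```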
arXiv Detail & Related papers (2021-10-13T14:30:09Z)
- Efficient Neural Architecture Search with Performance Prediction [0.0]
We use neural architecture search to find the best network architecture for the task at hand.
Existing NAS algorithms generally evaluate the fitness of a new architecture by fully training it from scratch.
An end-to-end offline performance predictor is proposed to accelerate the evaluation of sampled architectures.
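A minimal sketch of this idea, assuming architectures are encoded as fixed-length vectors: a small regressor is trained offline on (architecture, accuracy) pairs and then ranks new candidates without training them from scratch. The class name and layer sizes are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """Surrogate that maps an architecture encoding to a predicted accuracy in [0, 1]."""
    def __init__(self, encoding_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoding_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, arch_encoding):
        # arch_encoding: (batch, encoding_dim) tensor describing sampled architectures
        return self.net(arch_encoding).squeeze(-1)
```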
arXiv Detail & Related papers (2021-08-04T05:44:16Z)
- On the Exploitation of Neuroevolutionary Information: Analyzing the Past for a More Efficient Future [60.99717891994599]
We propose an approach that extracts information from neuroevolutionary runs and uses it to build a metamodel.
We inspect the best structures found during neuroevolutionary searches of generative adversarial networks with varying characteristics.
arXiv Detail & Related papers (2021-05-26T20:55:29Z)
- Learning Efficient, Explainable and Discriminative Representations for Pulmonary Nodules Classification [2.4565395352560895]
In this work, we aim to build an efficient and (partially) explainable classification model.
We use neural architecture search (NAS) to automatically search for 3D network architectures with an excellent accuracy/speed trade-off.
In the inference stage, we employ an ensemble of diverse neural networks to improve the prediction accuracy and robustness.
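The inference-time ensembling mentioned above amounts to averaging class probabilities over several trained networks; a minimal sketch follows (the function name and the probability-averaging scheme are assumptions, not the paper's exact procedure).

```python
import torch

def ensemble_predict(models, x):
    """Average the softmax outputs of several trained networks and return the
    class with the highest mean probability."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```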
arXiv Detail & Related papers (2021-01-19T02:53:44Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Multi-fidelity Neural Architecture Search with Knowledge Distillation [69.09782590880367]
We propose a Bayesian multi-fidelity method for neural architecture search: MF-KD.
Knowledge distillation adds a term to the loss function that forces a network to mimic a teacher network.
We show that training for a few epochs with such a modified loss function leads to a better selection of neural architectures than training for a few epochs with a logistic loss.
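The modified loss described above is the standard knowledge-distillation objective: a label loss plus a term that pulls the candidate network's softened predictions toward the teacher's. The sketch below shows this common formulation; the temperature, weighting, and use of cross-entropy as the base loss are illustrative rather than the paper's exact settings.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Label loss plus a distillation term forcing the candidate (student) network
    to mimic the teacher's softened output distribution."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```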
arXiv Detail & Related papers (2020-06-15T12:32:38Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)