Graph Neural Networks for Leveraging Industrial Equipment Structure: An
application to Remaining Useful Life Estimation
- URL: http://arxiv.org/abs/2006.16556v1
- Date: Tue, 30 Jun 2020 06:38:08 GMT
- Title: Graph Neural Networks for Leveraging Industrial Equipment Structure: An
application to Remaining Useful Life Estimation
- Authors: Jyoti Narwariya, Pankaj Malhotra, Vishnu TV, Lovekesh Vig, Gautam
Shroff
- Abstract summary: We propose to capture the structure of complex equipment in the form of a graph, and use graph neural networks (GNNs) to model multi-sensor time-series data.
We observe that the proposed GNN-based RUL estimation model compares favorably to several strong baselines from literature such as those based on RNNs and CNNs.
- Score: 21.297461316329453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated equipment health monitoring from streaming multisensor time-series
data can be used to enable condition-based maintenance, avoid sudden
catastrophic failures, and ensure high operational availability. We note that
most complex machinery has a well-documented and readily accessible underlying
structure capturing the inter-dependencies between sub-systems or modules. Deep
learning models such as those based on recurrent neural networks (RNNs) or
convolutional neural networks (CNNs) fail to explicitly incorporate this
potentially rich source of domain knowledge into the learning procedure. In
this work, we propose to capture the structure of complex equipment in the
form of a graph, and use graph neural networks (GNNs) to model multi-sensor
time-series data. Using remaining useful life estimation as an application
task, we evaluate the advantage of incorporating the graph structure via GNNs
on the publicly available turbofan engine benchmark dataset. We observe that
the proposed GNN-based RUL estimation model compares favorably to several
strong baselines from literature such as those based on RNNs and CNNs.
Additionally, we observe that the learned network is able to focus on the
module (node) with impending failure through a simple attention mechanism,
potentially paving the way for actionable diagnosis.
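To make the proposed setup concrete, here is a minimal sketch in PyTorch of a GNN-over-modules RUL estimator in the spirit of the abstract: a shared GRU encodes each module's sensor streams, node embeddings are mixed along the equipment graph, and a simple attention layer pools modules into a graph embedding for RUL regression while exposing per-module weights. All names (`EquipmentGNN`, the toy adjacency, layer sizes) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a GNN-over-modules RUL estimator; illustrative only,
# NOT the authors' implementation.
import torch
import torch.nn as nn

class EquipmentGNN(nn.Module):
    """Graph of equipment modules -> attention-pooled RUL estimate."""
    def __init__(self, n_sensors_per_node, hidden=32, gnn_steps=2):
        super().__init__()
        # A shared GRU encodes each module's multi-sensor time series.
        self.encoder = nn.GRU(n_sensors_per_node, hidden, batch_first=True)
        # One linear mixing weight per message-passing step.
        self.mix = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(gnn_steps))
        self.attn = nn.Linear(hidden, 1)  # scores each module (node) embedding
        self.head = nn.Linear(hidden, 1)  # RUL regression head

    def forward(self, x, adj):
        # x: (batch, n_nodes, time, sensors); adj: (n_nodes, n_nodes) with self-loops
        b, n, t, s = x.shape
        _, h = self.encoder(x.reshape(b * n, t, s))  # final GRU state per node
        h = h.squeeze(0).reshape(b, n, -1)           # (batch, n_nodes, hidden)
        a = adj / adj.sum(dim=1, keepdim=True)       # row-normalized adjacency
        for lin in self.mix:
            h = torch.relu(lin(a @ h))               # propagate along the graph
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # per-module attention
        g = (w.unsqueeze(-1) * h).sum(dim=1)         # attention-pooled embedding
        return self.head(g).squeeze(-1), w           # RUL estimate + node weights

# Toy usage: 4 modules in a chain, 3 sensors each, 50 time steps, batch of 8.
adj = torch.eye(4) + torch.tensor([[0., 1, 0, 0],
                                   [1, 0, 1, 0],
                                   [0, 1, 0, 1],
                                   [0, 0, 1, 0]])
model = EquipmentGNN(n_sensors_per_node=3)
rul, node_weights = model(torch.randn(8, 4, 50, 3), adj)
```

The second return value corresponds to the kind of per-module attention the abstract describes: the node receiving the largest weight can be read as the module the model is focusing on, which is what would make the diagnosis actionable.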
Related papers
- EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture relying on Parametric Leaky Integrate and Fire (PLIF) neurons; a generic sketch of such a neuron appears after this list.
We introduce an end-to-end biologically inspired semantic segmentation approach by combining Spiking Neural Networks with event cameras.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of MIoU.
arXiv Detail & Related papers (2024-06-20T10:36:24Z)
- A Generative Self-Supervised Framework using Functional Connectivity in fMRI Data [15.211387244155725]
Deep neural networks trained on Functional Connectivity (FC) networks extracted from functional Magnetic Resonance Imaging (fMRI) data have gained popularity.
Recent research on the application of Graph Neural Network (GNN) to FC suggests that exploiting the time-varying properties of the FC could significantly improve the accuracy and interpretability of the model prediction.
The high cost of acquiring high-quality fMRI data and corresponding labels poses a hurdle to their application in real-world settings.
We propose a generative SSL approach that is tailored to effectively harness temporal information within dynamic FC.
arXiv Detail & Related papers (2023-12-04T16:14:43Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Explicit Context Integrated Recurrent Neural Network for Sensor Data Applications [0.0]
Context Integrated RNN (CiRNN) integrates explicit context, represented in the form of contextual features, into the recurrent computation.
Experiments show improvements of 39% and 87%, respectively, over state-of-the-art models.
arXiv Detail & Related papers (2023-01-12T13:58:56Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges [7.071890461446324]
We present a range of capabilities, some of which are already well-adopted in HEP communities and some of which are still immature.
With the widespread adoption of GNNs in industry, the HEP community is well-placed to benefit from rapid improvements in GNN latency and memory usage.
We hope to capture the landscape of graph techniques in machine learning as well as point out the most significant gaps that are inhibiting potentially large leaps in research.
arXiv Detail & Related papers (2022-03-23T04:36:04Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited depth of most existing GNNs, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a neoteric variant of deep convolutional neural network architecture to ameliorate the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space; a generic sketch of the underlying binarization idea also appears after this list.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
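The two binarization entries above ("Binary Graph Neural Networks" and "Binarized Graph Neural Network") both revolve around replacing real-valued parameters with {-1, +1} values. A generic sketch of that idea, assuming PyTorch and the standard straight-through estimator; this is not the BGN or Binary-GNN authors' code:

```python
# Generic weight binarization with a straight-through estimator (STE);
# illustrative only, NOT the BGN / Binary-GNN implementations.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # quantize weights to {-1, +1} (sign(0) = 0)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass gradients through where the real weight lies in [-1, 1].
        return grad_out * (w.abs() <= 1).float()

class BinaryGraphLayer(nn.Module):
    """One message-passing layer whose weights are binarized on the fly."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_in, d_out) * 0.1)

    def forward(self, h, a):
        # h: (n_nodes, d_in) node features; a: normalized adjacency with self-loops
        wb = BinarizeSTE.apply(self.weight)  # binary weights in the forward pass
        return torch.relu(a @ h @ wb)

# Toy usage: 5 isolated nodes (identity adjacency), 16 -> 8 features.
layer = BinaryGraphLayer(16, 8)
out = layer(torch.randn(5, 16), torch.eye(5))
out.sum().backward()  # gradients reach the real-valued weights via the STE
```

The time and space gains those entries report come from storing and multiplying 1-bit weights at inference; this sketch only mimics the training-time mechanics.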
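The EvSegSNN entry relies on Parametric Leaky Integrate and Fire (PLIF) neurons. Below is a minimal, generic PLIF sketch in PyTorch, with a sigmoid-parameterized learnable leak and a rectangular surrogate gradient; it is an assumption-laden illustration, not the EvSegSNN code.

```python
# Generic Parametric Leaky Integrate-and-Fire (PLIF) neuron sketch;
# illustrative only, NOT the EvSegSNN implementation.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # surrogate derivative

class PLIF(nn.Module):
    """Leaky integrate-and-fire neuron with a learnable leak (the 'parametric' part)."""
    def __init__(self, threshold=1.0):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(0.0))  # leak = sigmoid(w) in (0, 1)
        self.threshold = threshold

    def forward(self, current, mem):
        beta = torch.sigmoid(self.w)               # learned membrane leak
        mem = beta * mem + current                 # integrate input current
        spk = SpikeFn.apply(mem - self.threshold)  # fire on threshold crossing
        mem = mem - spk * self.threshold           # soft reset after a spike
        return spk, mem

# Toy usage: one neuron layer unrolled over 10 time steps.
neuron, mem = PLIF(), torch.zeros(4, 16)
for _ in range(10):
    spk, mem = neuron(torch.rand(4, 16), mem)
```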