Embedding Graph Convolutional Networks in Recurrent Neural Networks for
Predictive Monitoring
- URL: http://arxiv.org/abs/2112.09641v1
- Date: Fri, 17 Dec 2021 17:30:30 GMT
- Title: Embedding Graph Convolutional Networks in Recurrent Neural Networks for
Predictive Monitoring
- Authors: Efrén Rama-Maneiro, Juan C. Vidal, Manuel Lama
- Abstract summary: This paper proposes an approach based on graph convolutional networks and recurrent neural networks.
An experimental evaluation on real-life event logs shows that our approach is more consistent and outperforms the current state-of-the-art approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictive monitoring of business processes is a subfield of process mining
that aims to predict, among other things, the characteristics of the next event
or the sequence of next events. Although multiple approaches based on deep
learning have been proposed, mainly recurrent neural networks and convolutional
neural networks, none of them really exploit the structural information
available in process models. This paper proposes an approach based on graph
convolutional networks and recurrent neural networks that uses information
directly from the process model. An experimental evaluation on real-life event
logs shows that our approach is more consistent and outperforms the current
state-of-the-art approaches.
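The architecture itself is not spelled out in this summary, but the core idea, replacing a plain activity-embedding lookup with graph-convolved embeddings computed over the process model before the recurrent layer, can be sketched in a few lines. The PyTorch sketch below is a minimal illustration under simplifying assumptions (one graph node per activity, a dense adjacency matrix derived from the process model); all class and variable names are hypothetical and the authors' actual design may differ.

```python
# Minimal sketch (not the authors' code): a GCN over the process-model graph
# produces structure-aware activity embeddings that feed an LSTM predictor.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops:
        # H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(dim=1).pow(-0.5)
        a = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(self.lin(a @ x))

class GCNLSTMPredictor(nn.Module):
    def __init__(self, n_activities, emb_dim=32, hid_dim=64):
        super().__init__()
        self.node_emb = nn.Embedding(n_activities, emb_dim)  # one node per activity
        self.gcn = GCNLayer(emb_dim, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_activities)

    def forward(self, prefixes, adj):
        # prefixes: (batch, seq_len) activity indices; adj: (n, n) float
        # adjacency of the process-model graph
        nodes = self.gcn(self.node_emb.weight, adj)  # structure-aware embeddings
        seq = nodes[prefixes]                        # one embedding per event
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])                    # logits for the next activity
```

Because the GCN mixes each activity's embedding with those of its model-level neighbors, every event in the prefix is represented with structural context before the LSTM sees it.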
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
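Read literally, "sequential processing of subsets of neural network weights" suggests turning a flat weight vector into a token sequence. The helper below is a hypothetical preprocessing sketch in that spirit, not the SANE method itself.

```python
# Hypothetical preprocessing sketch: flatten a model's parameters and
# chunk them into fixed-size "weight tokens" for a sequence model.
import torch
import torch.nn as nn

def weights_to_tokens(model: nn.Module, chunk: int = 64) -> torch.Tensor:
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    pad = (-flat.numel()) % chunk                  # zero-pad to a multiple of `chunk`
    flat = torch.cat([flat, flat.new_zeros(pad)])
    return flat.view(-1, chunk)                    # (num_tokens, chunk)
```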
- Opening the Black Box: predicting the trainability of deep neural networks with reconstruction entropy [0.0]
We present a method for predicting the trainable regime in parameter space for deep feedforward neural networks.
For both the MNIST and CIFAR10 datasets, we show that a single epoch of training is sufficient to predict the trainability of the deep feedforward network.
arXiv Detail & Related papers (2024-06-13T18:00:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Challenges in Pre-Training Graph Neural Networks for Context-Based Fake News Detection: An Evaluation of Current Strategies and Resource Limitations [1.9870554622325414]
We propose to apply pre-training of Graph Neural Networks (GNNs) in the domain of context-based fake news detection.
Our experiments provide an evaluation of different pre-training strategies for graph-based misinformation detection.
We argue that a major current issue is the lack of suitable large-scale resources that can be used for pre-training.
arXiv Detail & Related papers (2024-02-28T09:10:25Z)
- Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
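One generic way to read the "fact" above (standard optimization theory, not taken from the paper): gradient descent with learning rate $\eta$ is the Euler discretization of a gradient flow over the edge weights,

```latex
\frac{\mathrm{d}w_{ij}(t)}{\mathrm{d}t} = -\,\frac{\partial \mathcal{L}}{\partial w_{ij}},
\qquad
w_{ij}^{(k+1)} = w_{ij}^{(k)} - \eta\,\frac{\partial \mathcal{L}}{\partial w_{ij}}\bigg|_{w^{(k)}},
```

so the trajectory of the weights during training can be studied as a dynamical system over the network's edges.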
- CCasGNN: Collaborative Cascade Prediction Based on Graph Neural Networks [0.49269463638915806]
Cascade prediction aims at modeling information diffusion in the network.
Recent efforts have been devoted to combining network structure and sequence features via graph neural networks and recurrent neural networks.
We propose a novel method CCasGNN considering the individual profile, structural features, and sequence information.
arXiv Detail & Related papers (2021-12-07T11:37:36Z)
- Masking Neural Networks Using Reachability Graphs to Predict Process Events [0.0]
Decay Replay Mining is a deep learning method that utilizes process model notations to predict the next event.
This paper proposes an approach to further interlock the process model of Decay Replay Mining with its neural network for next event prediction.
arXiv Detail & Related papers (2021-08-01T09:06:55Z)
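The masking idea lends itself to a compact sketch: given the set of activities that the reachability graph enables in the current state, disallowed next events can be zeroed out before normalization. This is an illustrative fragment, not the paper's implementation; the `enabled` set is assumed to come from replaying the prefix on the process model.

```python
# Illustrative fragment (not the paper's code): mask next-event logits so
# that only activities enabled by the reachability graph keep probability.
import torch

def masked_next_event_probs(logits: torch.Tensor, enabled: set) -> torch.Tensor:
    """logits: (n_activities,) raw network scores.
    enabled: indices of activities reachable from the current state
    (assumed non-empty, e.g. obtained by replaying the prefix on the model)."""
    mask = torch.full_like(logits, float("-inf"))
    mask[list(enabled)] = 0.0                    # keep enabled activities untouched
    return torch.softmax(logits + mask, dim=-1)  # disabled events get zero mass
```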
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
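As generic background (the paper's exact formulation over features may differ), the mean-field limit replaces a wide layer's average over neurons with an integral against a probability measure $\rho$ on parameters:

```latex
f(x) = \frac{1}{n}\sum_{i=1}^{n}\phi(x;\theta_i)
\;\longrightarrow\;
\int \phi(x;\theta)\,\mathrm{d}\rho(\theta) \quad (n \to \infty),
```

so training is described by the evolution of the measure $\rho$ rather than by individual weights.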