Statistical process monitoring of artificial neural networks
- URL: http://arxiv.org/abs/2209.07436v2
- Date: Thu, 27 Jul 2023 07:47:49 GMT
- Title: Statistical process monitoring of artificial neural networks
- Authors: Anna Malinovskaya, Pavlo Mozharovskyi, Philipp Otto
- Abstract summary: In machine learning, the learned relationship between the input and the output must remain valid during the model's deployment.
We propose considering the latent feature representation of the data (called "embedding") generated by the ANN to determine the time when the data stream starts being nonstationary.
- Score: 1.3213490507208525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of models based on artificial intelligence demands
innovative monitoring techniques which can operate in real time with low
computational costs. In machine learning, especially if we consider artificial
neural networks (ANNs), the models are often trained in a supervised manner.
Consequently, the learned relationship between the input and the output must
remain valid during the model's deployment. If this stationarity assumption
holds, we can conclude that the ANN provides accurate predictions. Otherwise,
the retraining or rebuilding of the model is required. We propose considering
the latent feature representation of the data (called "embedding") generated by
the ANN to determine the time when the data stream starts being nonstationary.
In particular, we monitor embeddings by applying multivariate control charts
based on the data depth calculation and normalized ranks. The performance of
the introduced method is compared with benchmark approaches for various ANN
architectures and different underlying data formats.
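To make the monitoring pipeline concrete, here is a minimal Python sketch of the depth-and-rank idea. It is an illustration under stated assumptions, not the authors' exact procedure: it uses Mahalanobis depth as the depth notion, a fixed lower control limit on the normalized rank, and synthetic Gaussian vectors in place of real ANN embeddings.

```python
# Minimal sketch of depth-and-rank monitoring of ANN embeddings.
# Assumptions (not from the paper): Mahalanobis depth, a plain lower
# control limit on the normalized rank, and NumPy-only code.
import numpy as np

def mahalanobis_depth(points, reference):
    """Depth of each row of `points` relative to the reference sample."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reference, rowvar=False))
    diff = points - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distance
    return 1.0 / (1.0 + d2)  # deep (central) points score close to 1

def normalized_rank(depth, reference_depths):
    """Fraction of in-control depths that do not exceed the new depth."""
    return (np.sum(reference_depths <= depth) + 1) / (len(reference_depths) + 1)

rng = np.random.default_rng(0)

# Phase I: embeddings collected while the stream is known to be in control.
reference = rng.normal(size=(500, 16))  # stand-in for in-control embeddings
ref_depths = mahalanobis_depth(reference, reference)

# Phase II: monitor the stream; persistently tiny ranks indicate that new
# embeddings lie on the outskirts of the in-control distribution.
LCL = 0.05  # illustrative lower control limit
for t in range(100):
    shift = 0.0 if t < 50 else 2.0  # simulated change point at t = 50
    emb = rng.normal(loc=shift, size=(1, 16))  # embedding of the new observation
    r = normalized_rank(mahalanobis_depth(emb, reference)[0], ref_depths)
    if r < LCL:
        print(f"t = {t}: rank {r:.3f} below LCL -> possible nonstationarity")
```

In practice the normalized ranks would typically drive a sequential control chart (for instance an EWMA or CUSUM statistic) rather than a per-observation limit; the sketch only shows how depths turn into ranks that flag embeddings drifting away from the in-control distribution.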
Related papers
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z)
- Neural Differential Recurrent Neural Network with Adaptive Time Steps [11.999568208578799]
We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states.
We adaptively select time steps based on the steepness of changes of the data over time so as to train the model more efficiently for the "spike-like" time series; a toy sketch of this idea follows the entry.
arXiv Detail & Related papers (2023-06-02T16:46:47Z)
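As a companion to the entry above, the following toy sketch shows steepness-driven step selection. The inverse-slope rule and all parameter values are our own simplifying assumptions, not the selection criterion of RNN-ODE-Adap.

```python
import numpy as np

def adaptive_time_steps(t, y, base_dt=0.5, min_dt=0.05):
    """Propose solver times: small steps where |dy/dt| is large, coarse elsewhere."""
    slope = np.abs(np.gradient(y, t))  # local steepness of the observed series
    times = [float(t[0])]
    while times[-1] < t[-1]:
        s = np.interp(times[-1], t, slope)     # steepness at the current time
        dt = max(min_dt, base_dt / (1.0 + s))  # steep segment -> finer step
        times.append(min(times[-1] + dt, float(t[-1])))
    return np.array(times)

# A spike-like series: flat everywhere except a sharp pulse around t = 5.
t = np.linspace(0.0, 10.0, 200)
y = np.exp(-((t - 5.0) ** 2) / 0.05)
steps = adaptive_time_steps(t, y)
near_spike = np.sum((steps > 4.0) & (steps < 6.0))
print(f"{len(steps)} steps in total, {near_spike} of them near the spike")
```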
- Brain-Inspired Spiking Neural Network for Online Unsupervised Time Series Prediction [13.521272923545409]
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding.
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system; a minimal delay-embedding sketch follows the entry.
arXiv Detail & Related papers (2023-04-10T16:18:37Z)
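To illustrate the reconstruction step of the entry above, the sketch below applies a classical Takens-style delay embedding with randomly drawn lags; using it as a stand-in for the paper's Random Delay Embedding is an assumption on our part.

```python
import numpy as np

def delay_embed(x, delays):
    """Stack the lagged copies x[t - d], one column per delay d."""
    d_max = max(delays)
    return np.stack([x[d_max - d : len(x) - d] for d in delays], axis=1)

rng = np.random.default_rng(1)
# Scalar observable of some underlying dynamical system (noisy sine as a toy).
x = np.sin(np.linspace(0.0, 40.0, 2000)) + 0.01 * rng.standard_normal(2000)

lags = np.sort(rng.choice(np.arange(1, 50), size=3, replace=False))
states = delay_embed(x, [0, *lags])  # reconstructed state vectors
print(states.shape)  # (2000 - max lag, 4): one row per reconstructed state
```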
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- A data filling methodology for time series based on CNN and (Bi)LSTM neural networks [0.0]
We develop two Deep Learning models aimed at filling data gaps in time series obtained from monitored apartments in Bolzano, Italy.
Our approach manages to capture the fluctuating nature of the data and shows good accuracy in reconstructing the target time series.
arXiv Detail & Related papers (2022-04-21T09:40:30Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations; a minimal sketch of such a cell follows the entry.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
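The construction in the last entry can be sketched in a few lines. The cell below follows the liquid time-constant update dx/dt = -(1/tau + f) * x + f * A with a learned gate f; the tanh gate, the parameter shapes, and the explicit Euler integration are simplifying assumptions on our part.

```python
import numpy as np

class LTCCell:
    """Toy liquid time-constant cell: a gated linear first-order ODE per unit."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n_in, n_hidden))      # input weights
        self.U = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)                               # gate bias
        self.tau = np.ones(n_hidden)                              # base time constants
        self.A = np.ones(n_hidden)                                # ODE bias vector

    def step(self, x, u, dt=0.1):
        f = np.tanh(u @ self.W + x @ self.U + self.b)  # input-dependent gate
        dx = -(1.0 / self.tau + f) * x + f * self.A    # gated first-order dynamics
        return x + dt * dx                             # explicit Euler step

cell = LTCCell(n_in=2, n_hidden=8)
x = np.zeros(8)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])  # toy two-channel input
    x = cell.step(x, u)
print(x.round(3))  # hidden state stays bounded for this toy input
```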