Using LSTM for the Prediction of Disruption in ADITYA Tokamak
- URL: http://arxiv.org/abs/2007.06230v1
- Date: Mon, 13 Jul 2020 08:06:43 GMT
- Title: Using LSTM for the Prediction of Disruption in ADITYA Tokamak
- Authors: Aman Agarwal, Aditya Mishra, Priyanka Sharma, Swati Jain, Sutapa
Ranjan, Ranjana Manchanda
- Abstract summary: Major disruptions in a tokamak pose a serious threat to the vessel and its surrounding pieces of equipment.
Many machine learning techniques have already been in use at large tokamaks like JET and ASDEX, but are not suitable for ADITYA.
We discuss a new real-time approach to predict the time of disruption in ADITYA tokamak and validate the results on an experimental dataset.
- Score: 3.9022413625157792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Major disruptions in a tokamak pose a serious threat to the vessel and its
surrounding pieces of equipment. The ability of the systems to detect any
behavior that can lead to disruption can help in alerting the system beforehand
and prevent its harmful effects. Many machine learning techniques have already
been in use at large tokamaks like JET and ASDEX, but are not suitable for
ADITYA, which is comparatively small. Through this work, we discuss a new
real-time approach to predict the time of disruption in ADITYA tokamak and
validate the results on an experimental dataset. The system uses selected
diagnostics from the tokamak and, after some pre-processing steps, sends them to
a time-sequence Long Short-Term Memory (LSTM) network. The model can make
predictions 12 ms in advance at low computational cost, which is quick enough
to be deployed in real-time applications.
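As a rough sketch of the pipeline described above (selected diagnostics, pre-processing, then a time-sequence LSTM), the PyTorch snippet below shows the general shape of such a model; the diagnostic count, window length, and hidden size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, not the authors' code: a time-sequence LSTM that maps a
# window of pre-processed tokamak diagnostics to a time-to-disruption estimate.
# The diagnostic count, window length, and hidden size below are assumptions.
import torch
import torch.nn as nn

class DisruptionLSTM(nn.Module):
    def __init__(self, n_diagnostics=8, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_diagnostics,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # predicted time to disruption

    def forward(self, x):             # x: (batch, window_len, n_diagnostics)
        out, _ = self.lstm(x)         # out: (batch, window_len, hidden_size)
        return self.head(out[:, -1])  # regress from the last hidden state

model = DisruptionLSTM()
window = torch.randn(1, 50, 8)        # one 50-step window of 8 diagnostics
print(model(window).item())           # untrained output, shape check only
```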
Related papers
- Machine Learning with Real-time and Small Footprint Anomaly Detection System for In-Vehicle Gateway [6.9113469208163245]
We propose to use self-information theory to generate values for training and testing models.
Our proposed method achieves an 8.7 times lower False Positive Rate (FPR), a 1.77 times faster testing time, and a 4.88 times smaller footprint.
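A loose illustration of the self-information idea: score each observed symbol by -log2 of its empirical frequency. The use of CAN message IDs and the example values below are assumptions, not details from the paper.

```python
# Illustrative sketch of self-information as a feature: I(x) = -log2 p(x),
# with p(x) estimated from empirical frequencies.
import math
from collections import Counter

def self_information(messages):
    counts = Counter(messages)
    total = len(messages)
    return {m: -math.log2(c / total) for m, c in counts.items()}

train_ids = ["0x1A0", "0x1A0", "0x2B4", "0x1A0", "0x3C7"]
print(self_information(train_ids))   # rarer IDs carry more self-information
```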
arXiv Detail & Related papers (2024-06-24T07:23:52Z)
- DAISY: Data Adaptive Self-Supervised Early Exit for Speech Representation Models [55.608981341747246]
We introduce Data Adaptive Self-Supervised Early Exit (DAISY), an approach that decides when to exit based on the self-supervised loss.
Our analysis of the adaptivity of DAISY shows that the model exits early (using fewer layers) on clean data and late (using more layers) on noisy data.
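The early-exit decision can be pictured roughly as below: run layers in order and stop once a self-supervised proxy loss at that depth falls under a threshold. The layers, heads, reconstruction-style loss, and threshold here are placeholders, not DAISY's actual components.

```python
# Placeholder sketch of loss-driven early exit (not DAISY's implementation).
import torch
import torch.nn as nn

def early_exit_forward(layers, exit_heads, x, loss_fn, threshold):
    h = x
    for i, (layer, head) in enumerate(zip(layers, exit_heads)):
        h = layer(h)
        proxy_loss = loss_fn(head(h), x)   # self-supervised score at depth i
        if proxy_loss.item() < threshold:  # clean inputs tend to exit earlier
            return h, i + 1                # representation and layers used
    return h, len(layers)

layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(4)])
heads = nn.ModuleList([nn.Linear(16, 16) for _ in range(4)])
_, used = early_exit_forward(layers, heads, torch.randn(1, 16),
                             nn.MSELoss(), threshold=0.5)
print(f"exited after {used} layers")
```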
arXiv Detail & Related papers (2024-06-08T12:58:13Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
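A generic illustration (not the paper's attack code) of why such a timing channel exists: a plain NMS loop takes longer as the number of candidate boxes grows, so wall-clock time alone already leaks information about the detector's predictions.

```python
# Toy NMS whose running time scales with the number of candidate boxes.
import time
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

for n in (10, 1000):                        # few vs. many candidate boxes
    xy = np.random.rand(n, 2)
    wh = 0.05 + 0.1 * np.random.rand(n, 2)
    boxes = np.hstack([xy, xy + wh])        # [x1, y1, x2, y2]
    scores = np.random.rand(n)
    t0 = time.perf_counter()
    nms(boxes, scores)
    print(n, "boxes:", time.perf_counter() - t0, "s")
```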
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Uncertainty aware anomaly detection to predict errant beam pulses in the SNS accelerator [47.187609203210705]
We describe the application of an uncertainty-aware machine learning method, the Siamese neural network model, to predict upcoming errant beam pulses.
By predicting the upcoming failure, we can stop the accelerator before damage occurs.
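A minimal sketch of the Siamese comparison idea (not the SNS model): a shared encoder embeds a reference normal pulse and an upcoming pulse, and a large embedding distance flags the new pulse as potentially errant. Input size and layers are assumptions.

```python
# Siamese-style comparison with a shared encoder; sizes are illustrative.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))

def anomaly_score(reference_pulse, new_pulse):
    with torch.no_grad():
        return torch.norm(encoder(reference_pulse) - encoder(new_pulse)).item()

ref = torch.randn(256)           # waveform of a known-good pulse
new = torch.randn(256)           # upcoming pulse to screen
print(anomaly_score(ref, new))   # threshold this score to halt the accelerator
```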
arXiv Detail & Related papers (2021-10-22T18:37:22Z)
- Cloud Failure Prediction with Hierarchical Temporal Memory: An Empirical Assessment [64.73243241568555]
Hierarchical Temporal Memory (HTM) is an unsupervised learning algorithm inspired by the features of the neocortex.
This paper presents the first systematic study that assesses HTM in the context of failure prediction.
arXiv Detail & Related papers (2021-10-06T07:09:45Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to train inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
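A heavily simplified sketch of the gradient-adjustment idea, not the paper's IGSGD: per-sample weights from a small policy network rescale the loss, and therefore the back-propagated gradients, while the model trains directly on zero-filled inputs plus a missingness mask. The RL update of the policy is omitted, and all components below are assumptions.

```python
# Simplified gradient reweighting for incomplete inputs (illustrative only).
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
policy = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())   # emits gradient weights

x = torch.tensor([[1.0, float("nan"), 2.0, 0.5]])       # one incomplete sample
y = torch.tensor([[1.0]])

mask = (~torch.isnan(x)).float()          # 1 where observed, 0 where missing
x_filled = torch.nan_to_num(x, nan=0.0)   # zero fill, no statistical imputation

weight = policy(mask).detach()            # how much this sample's gradient counts
loss = (weight * (model(x_filled) - y) ** 2).mean()
loss.backward()                           # model gradients are scaled by weight
```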
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Deep Anomaly Detection for Time-series Data in Industrial IoT: A Communication-Efficient On-device Federated Learning Approach [40.992167455141946]
This paper proposes a new communication-efficient on-device federated learning (FL)-based deep anomaly detection framework for sensing time-series data in IIoT.
We first introduce a FL framework to enable decentralized edge devices to collaboratively train an anomaly detection model, which can improve its generalization ability.
Second, we propose an Attention Mechanism-based Convolutional Neural Network-Long Short Term Memory (AMCNN-LSTM) model to accurately detect anomalies.
Third, to adapt the proposed framework to the timeliness of industrial anomaly detection, we propose a gradient compression mechanism based on Top-k selection to improve communication efficiency.
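The Top-k ingredient can be sketched as follows: only the k largest-magnitude gradient entries (and their indices) are communicated, the rest are dropped. Shapes and k below are illustrative, not from the paper.

```python
# Top-k gradient sparsification sketch.
import numpy as np

def top_k_sparsify(grad, k):
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of k largest |g_i|
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape), idx         # send values + indices only

grad = np.random.randn(1000)
compressed, sent_idx = top_k_sparsify(grad, k=10)  # ~1% of entries transmitted
print(np.count_nonzero(compressed))                # 10
```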
arXiv Detail & Related papers (2020-07-19T16:47:26Z)
- Predictive Maintenance for Edge-Based Sensor Networks: A Deep Reinforcement Learning Approach [68.40429597811071]
The risk of unplanned equipment downtime can be minimized through Predictive Maintenance of revenue-generating assets.
A model-free Deep Reinforcement Learning algorithm is proposed for predictive equipment maintenance from an equipment-based sensor network context.
Unlike traditional black-box regression models, the proposed algorithm self-learns an optimal maintenance policy and provides actionable recommendations for each piece of equipment.
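As a toy stand-in for the idea of self-learning a maintenance policy (tabular Q-learning rather than the paper's deep RL, with invented states, rewards, and dynamics):

```python
# Tabular Q-learning over discretized health states; actions: continue/maintain.
import random

n_states, actions = 5, ("continue", "maintain")
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    if action == "maintain":
        return 0, -1.0                      # pay maintenance cost, reset health
    nxt = min(state + 1, n_states - 1)      # equipment degrades
    return nxt, -10.0 if nxt == n_states - 1 else 0.0   # failure is expensive

state = 0
for _ in range(5000):
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda act: Q[(state, act)])
    nxt, r = step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in actions)
                              - Q[(state, a)])
    state = nxt

print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states)})
```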
arXiv Detail & Related papers (2020-07-07T10:00:32Z)
- ReRe: A Lightweight Real-time Ready-to-Go Anomaly Detection Approach for Time Series [0.27528170226206433]
This paper introduces ReRe, a Real-time Ready-to-go proactive Anomaly Detection algorithm for streaming time series.
ReRe employs two lightweight Long Short-Term Memory (LSTM) models to predict and jointly determine whether or not an upcoming data point is anomalous.
Experiments based on real-world time-series datasets demonstrate the good performance of ReRe in real-time anomaly detection.
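The detection step can be sketched generically as thresholding the current prediction error against statistics of recent errors; ReRe's actual rule, window lengths, and thresholds differ, so treat this purely as an illustration.

```python
# Generic prediction-error thresholding for streaming data (not ReRe's rule).
from collections import deque
import statistics

recent_errors = deque(maxlen=100)

def is_anomalous(predicted, observed):
    error = abs(predicted - observed)
    if len(recent_errors) >= 10:                       # need some history first
        threshold = (statistics.mean(recent_errors)
                     + 3 * statistics.pstdev(recent_errors))
        flag = error > threshold
    else:
        flag = False                                   # warm-up period
    recent_errors.append(error)
    return flag

print(is_anomalous(predicted=4.9, observed=5.1))       # False during warm-up
```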
arXiv Detail & Related papers (2020-04-05T21:26:24Z)
- Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems [1.4213973379473654]
Cyber-physical systems benefit from using machine learning components that can handle the uncertainty and variability of the real world.
Deep neural networks, however, introduce new types of hazards that may impact system safety.
Out-of-distribution data may lead to a large error and compromise safety.
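A generic runtime OOD check, for illustration only (the paper's detector is built differently): flag an input whose features fall far outside the training distribution. Feature dimensions and data below are invented.

```python
# Per-feature z-score check against training statistics.
import numpy as np

train = np.random.randn(1000, 4)            # stand-in for training features
mu, sigma = train.mean(axis=0), train.std(axis=0)

def out_of_distribution(x, z_max=4.0):
    z = np.abs((x - mu) / sigma)
    return bool(np.any(z > z_max))          # any feature far outside training range

print(out_of_distribution(np.array([0.1, -0.2, 0.3, 0.0])))   # likely False
print(out_of_distribution(np.array([0.1, -0.2, 9.0, 0.0])))   # likely True
```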
arXiv Detail & Related papers (2020-01-28T17:51:07Z)