Recovery command generation towards automatic recovery in ICT systems by
Seq2Seq learning
- URL: http://arxiv.org/abs/2003.10784v1
- Date: Tue, 24 Mar 2020 11:34:10 GMT
- Title: Recovery command generation towards automatic recovery in ICT systems by
Seq2Seq learning
- Authors: Hiroki Ikeuchi, Akio Watanabe, Tsutomu Hirao, Makoto Morishita,
Masaaki Nishino, Yoichi Matsuo, Keishiro Watanabe
- Abstract summary: We propose a method of estimating recovery commands by using Seq2Seq, a neural network model.
When a new failure occurs, our method estimates plausible commands that recover from the failure on the basis of collected logs.
- Score: 11.387419806996599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increase in scale and complexity of ICT systems, their operation
increasingly requires automatic recovery from failures. Although it has become
possible to automatically detect anomalies and analyze root causes of failures
with current methods, making decisions on what commands should be executed to
recover from failures still depends on manual operation, which is quite
time-consuming. Toward automatic recovery, we propose a method of estimating
recovery commands by using Seq2Seq, a neural network model. This model learns
complex relationships between logs obtained from equipment and recovery
commands that operators executed in the past. When a new failure occurs, our
method estimates plausible commands that recover from the failure on the basis
of collected logs. We conducted experiments using a synthetic dataset and
realistic OpenStack dataset, demonstrating that our method can estimate
recovery commands with high accuracy.
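The abstract describes learning a mapping from log sequences to recovery command sequences with Seq2Seq. Below is a minimal sketch of that idea, assuming a PyTorch LSTM encoder-decoder, toy vocabulary sizes, and greedy decoding; these choices are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch: encode tokenized equipment logs, decode a recovery command.
# Vocabulary sizes, special-token ids, and greedy decoding are assumptions.
import torch
import torch.nn as nn

PAD, BOS = 0, 1  # assumed special token ids


class LogToCommandSeq2Seq(nn.Module):
    def __init__(self, log_vocab, cmd_vocab, emb=128, hidden=256):
        super().__init__()
        self.enc_emb = nn.Embedding(log_vocab, emb, padding_idx=PAD)
        self.dec_emb = nn.Embedding(cmd_vocab, emb, padding_idx=PAD)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, cmd_vocab)

    def forward(self, log_tokens, cmd_tokens):
        # Teacher-forced training pass: encode logs, decode the known command.
        _, state = self.encoder(self.enc_emb(log_tokens))
        dec_out, _ = self.decoder(self.dec_emb(cmd_tokens), state)
        return self.out(dec_out)  # logits over the command vocabulary

    @torch.no_grad()
    def greedy_decode(self, log_tokens, max_len=20):
        # Inference: estimate a plausible command for a newly collected log.
        _, state = self.encoder(self.enc_emb(log_tokens))
        tok = torch.full((log_tokens.size(0), 1), BOS, dtype=torch.long)
        result = []
        for _ in range(max_len):
            dec_out, state = self.decoder(self.dec_emb(tok), state)
            tok = self.out(dec_out).argmax(-1)
            result.append(tok)
        return torch.cat(result, dim=1)


# Toy usage: a batch of 2 log sequences (length 12) -> command token ids.
model = LogToCommandSeq2Seq(log_vocab=500, cmd_vocab=200)
logs = torch.randint(2, 500, (2, 12))
print(model.greedy_decode(logs).shape)  # torch.Size([2, 20])
```
In practice such a model would be trained on (log, command) pairs collected from past incidents, as the abstract describes; attention and beam search are common refinements.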
Related papers
- RecoveryChaining: Learning Local Recovery Policies for Robust Manipulation [41.38308130776887]
We propose to use hierarchical reinforcement learning to learn a separate recovery policy for a robot.
The recovery policy is triggered when a failure is detected based on sensory observations and seeks to take the robot to a state from which it can complete the task.
We evaluate our approach in three multi-step manipulation tasks with sparse rewards, where it learns significantly more robust recovery policies than those learned by baselines.
arXiv Detail & Related papers (2024-10-17T19:14:43Z) - Learning to Recover from Plan Execution Errors during Robot Manipulation: A Neuro-symbolic Approach [7.768747914019512]
We propose an approach that blends learning with symbolic search for automated error discovery and recovery.
We present an anytime version of our algorithm, where instead of recovering to the last correct state, we search for a sub-goal in the original plan.
arXiv Detail & Related papers (2024-05-29T10:03:57Z) - Recover: A Neuro-Symbolic Framework for Failure Detection and Recovery [2.0554045007430672]
This paper introduces Recover, a neuro-symbolic framework for online failure identification and recovery.
By integrating logical rules and LLM-based planners, Recover exploits symbolic information to enhance the ability of LLMs to generate recovery plans (see the sketch after this list).
arXiv Detail & Related papers (2024-03-31T17:54:22Z) - ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent [50.508669199496474]
We develop a ReAct-style LLM agent with the ability to reason and act upon external knowledge.
We refine the agent through a ReST-like method that iteratively trains on previous trajectories.
Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model.
arXiv Detail & Related papers (2023-12-15T18:20:15Z) - Automaton Distillation: Neuro-Symbolic Transfer Learning for Deep
Reinforcement Learning [11.31386674125334]
Reinforcement learning (RL) is a powerful tool for finding optimal policies in sequential decision processes.
Deep RL methods suffer from two weaknesses: collecting the amount of agent experience required for practical RL problems is prohibitively expensive, and the learned policies exhibit poor generalization on tasks outside of the training distribution.
We introduce automaton distillation, a form of neuro-symbolic transfer learning in which Q-value estimates from a teacher are distilled into a low-dimensional representation in the form of an automaton.
arXiv Detail & Related papers (2023-10-29T19:59:55Z) - REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous
Manipulation [61.7171775202833]
We introduce an efficient system for learning dexterous manipulation skills with reinforcement learning (RL).
The main idea of our approach is the integration of recent advances in sample-efficient RL and replay buffer bootstrapping.
Our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy.
arXiv Detail & Related papers (2023-09-06T19:05:31Z) - Representing Timed Automata and Timing Anomalies of Cyber-Physical
Production Systems in Knowledge Graphs [51.98400002538092]
This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system.
Both the model and the detected anomalies are described in the knowledge graph in order to allow operators an easier interpretation of the model and the detected anomalies.
arXiv Detail & Related papers (2023-08-25T15:25:57Z) - TELESTO: A Graph Neural Network Model for Anomaly Classification in
Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at the recognition of recurring anomaly types to enable automated remediation.
We propose a method that is invariant to dimensionality changes in the given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z) - A Novel Anomaly Detection Algorithm for Hybrid Production Systems based
on Deep Learning and Timed Automata [73.38551379469533]
DAD (DeepAnomalyDetection) is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
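The Recover entry above (logical rules combined with an LLM-based planner) can be illustrated with a short, heavily hypothetical sketch: a symbolic rule check flags the failure, and the violated rule plus the observed state are injected into a planner prompt. The `Rule` structure and the `llm_generate` stub are assumptions for illustration only, not that paper's interface.
```python
# Hypothetical sketch of a neuro-symbolic recovery loop: rules detect the
# failure, and the symbolic context constrains an LLM-drafted recovery plan.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    holds: Callable[[Dict[str, str]], bool]  # predicate over the observed state


def llm_generate(prompt: str) -> str:
    """Placeholder for any LLM call; replace with a real client."""
    return f"# plan drafted from prompt:\n{prompt}"


def detect_and_recover(state: Dict[str, str], rules: List[Rule]) -> List[str]:
    plans = []
    for rule in rules:
        if not rule.holds(state):
            # The violated rule and the raw state are passed to the planner.
            prompt = (f"Rule violated: {rule.name}\n"
                      f"Observed state: {state}\n"
                      f"Propose recovery steps:")
            plans.append(llm_generate(prompt))
    return plans


rules = [Rule("gripper_holds_object", lambda s: s.get("gripper") == "closed")]
print(detect_and_recover({"gripper": "open"}, rules))
```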
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.