Genetically Optimized Prediction of Remaining Useful Life
- URL: http://arxiv.org/abs/2102.08845v1
- Date: Wed, 17 Feb 2021 16:09:23 GMT
- Title: Genetically Optimized Prediction of Remaining Useful Life
- Authors: Shaashwat Agrawal, Sagnik Sarkar, Gautam Srivastava, Praveen Kumar
Reddy Maddikunta, Thippa Reddy Gadekallu
- Abstract summary: We implement LSTM and GRU models and compare the obtained results with a proposed genetically trained neural network.
We hope to improve the consistency of the predictions by adding another layer of optimization using Genetic Algorithms.
These models and the proposed architecture are tested on the NASA Turbofan Jet Engine dataset.
- Score: 4.115847582689283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Remaining useful life (RUL) prediction has become important for
energy optimization, cost-effectiveness, and risk mitigation. Most existing
RUL prediction algorithms are built on deep learning frameworks. In this
paper, we implement LSTM and GRU models and compare their results with those
of a proposed genetically trained neural network. Current models depend
solely on optimizers such as Adam and SGD for learning. Although the models
have worked well with these optimizers, even small uncertainties in
prognostic predictions can result in large losses. We aim to improve the
consistency of the predictions by adding another layer of optimization using
Genetic Algorithms: the hyper-parameters, learning rate and batch size, are
optimized beyond manual capacity. These models and the proposed architecture
are tested on the NASA Turbofan Jet Engine dataset. The optimized
architecture can select these hyper-parameters autonomously and provide
superior results.
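As a rough illustration of the approach described above, the sketch below runs a minimal genetic search over the two hyper-parameters the abstract names, learning rate and batch size. The fitness function `evaluate_rul_model` is a hypothetical stand-in for training the LSTM/GRU and returning validation RMSE on the turbofan data (a toy analytic surrogate keeps the sketch runnable); the selection, crossover, and mutation operators are generic choices, not the paper's exact design.

```python
import math
import random

def evaluate_rul_model(lr, batch):
    # Hypothetical stand-in for "train the LSTM/GRU with these
    # hyper-parameters and return validation RMSE"; a toy analytic
    # surrogate keeps the sketch runnable end to end.
    return (math.log10(lr) + 3) ** 2 + abs(batch - 64) / 64

BATCH_CHOICES = [16, 32, 64, 128, 256]

def random_individual():
    # Learning rate sampled log-uniformly in [1e-5, 1e-1].
    return {"lr": 10 ** random.uniform(-5, -1),
            "batch": random.choice(BATCH_CHOICES)}

def crossover(a, b):
    # Geometric mean of learning rates; batch size from either parent.
    return {"lr": (a["lr"] * b["lr"]) ** 0.5,
            "batch": random.choice([a["batch"], b["batch"]])}

def mutate(ind, rate=0.3):
    child = dict(ind)
    if random.random() < rate:   # perturb the learning rate on a log scale
        child["lr"] *= 10 ** random.gauss(0, 0.3)
    if random.random() < rate:   # resample the batch size
        child["batch"] = random.choice(BATCH_CHOICES)
    return child

def genetic_search(pop_size=10, generations=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda i: evaluate_rul_model(i["lr"], i["batch"]))
        parents = pop[:pop_size // 2]          # keep the lowest-RMSE half
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=lambda i: evaluate_rul_model(i["lr"], i["batch"]))

print(genetic_search())
```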
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
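The DiscoPOP entry above names one concrete mechanism: a loss that adaptively blends logistic and exponential losses. Below is a heavily hedged sketch of such a blend; the sigmoid gate and the `tau` temperature are illustrative assumptions, not the exact formula discovered in that paper.

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(margin, tau=0.05):
    # margin: scaled policy log-ratio of chosen minus rejected responses.
    logistic = F.softplus(-margin)        # logistic (DPO-style) loss
    exponential = torch.exp(-margin)      # exponential loss
    gate = torch.sigmoid(margin / tau)    # assumed margin-dependent mixing
    return (1 - gate) * logistic + gate * exponential
```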
- Edge-Efficient Deep Learning Models for Automatic Modulation Classification: A Performance Analysis [0.7428236410246183]
We investigate optimized convolutional neural networks (CNNs) developed for automatic modulation classification (AMC) of wireless signals.
We propose optimized models that combine these techniques to fuse their complementary optimization benefits.
The experimental results show that the proposed individual and combined optimization techniques are highly effective for developing models with significantly less complexity.
arXiv Detail & Related papers (2024-04-11T06:08:23Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Fine-Tuning Adaptive Stochastic Optimizers: Determining the Optimal Hyperparameter $ε$ via Gradient Magnitude Histogram Analysis [0.7366405857677226]
We introduce a new framework based on the empirical probability density function of the gradient's magnitude, termed the "gradient magnitude histogram".
We propose a novel algorithm using gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard.
arXiv Detail & Related papers (2023-11-20T04:34:19Z)
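A minimal sketch of the histogram idea from the entry above, under stated assumptions: element-wise gradient magnitudes collected during training are histogrammed on a log scale, and percentiles of that distribution bound the search space for the Adam-style safeguard epsilon. The percentile rule here is illustrative; the refined estimation procedure is that paper's contribution.

```python
import numpy as np

def gradient_magnitude_histogram(grads, bins=50):
    # Empirical distribution of |g| across all parameters, on a log10 scale.
    mags = np.concatenate([np.abs(g).ravel() for g in grads])
    return np.histogram(np.log10(mags[mags > 0]), bins=bins)

def epsilon_search_space(grads, low_pct=1.0, high_pct=50.0):
    # Assumption for illustration: a sensible safeguard epsilon sits below
    # the bulk of the gradient-magnitude distribution.
    mags = np.concatenate([np.abs(g).ravel() for g in grads])
    return np.percentile(mags[mags > 0], [low_pct, high_pct])
```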
- AdaLomo: Low-memory Optimization with Adaptive Learning Rate [59.64965955386855]
We introduce low-memory optimization with adaptive learning rate (AdaLomo) for large language models.
AdaLomo achieves results on par with AdamW while significantly reducing memory requirements, thereby lowering the hardware barrier to training large language models.
arXiv Detail & Related papers (2023-10-16T09:04:28Z)
- Target Variable Engineering [0.0]
We compare the predictive performance of regression models trained to predict numeric targets vs. classifiers trained to predict their binarized counterparts.
We find that regression requires significantly more computational effort to converge upon the optimal performance.
arXiv Detail & Related papers (2023-10-13T23:12:21Z)
- Comparative Evaluation of Metaheuristic Algorithms for Hyperparameter Selection in Short-Term Weather Forecasting [0.0]
This paper explores the application of metaheuristic algorithms, namely Genetic Algorithm (GA), Differential Evolution (DE), and Particle Swarm Optimization (PSO), to hyperparameter selection for short-term weather forecasting.
We evaluate their performance in weather forecasting based on metrics such as Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE).
arXiv Detail & Related papers (2023-09-05T22:13:35Z)
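For reference, the two metrics named in the entry above, in their standard form (these are the usual definitions, not anything specific to that paper):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, in percent; assumes no zero targets.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```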
- VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates.
We open source our learned optimizers, meta-training code, the associated train and test data, and an extensive benchmark suite with baselines at velo-code.io.
arXiv Detail & Related papers (2022-11-17T18:39:07Z)
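The VeLO entry above describes a small neural network that ingests gradients and outputs parameter updates. The caricature below shows only that idea: a two-feature MLP (gradient and a momentum trace) emitting per-parameter updates. VeLO's actual architecture and meta-training procedure are far richer; here the network weights are simply random.

```python
import torch
import torch.nn as nn

class TinyLearnedOptimizer(nn.Module):
    # Caricature of a learned optimizer: a small net maps per-parameter
    # features (gradient, momentum) to an update of the same shape.
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    @torch.no_grad()
    def step(self, param, grad, momentum, beta=0.9):
        momentum.mul_(beta).add_(grad, alpha=1 - beta)
        feats = torch.stack([grad.ravel(), momentum.ravel()], dim=-1)
        param.add_(self.net(feats).squeeze(-1).view_as(param))
```

In meta-training, the weights of `self.net` would themselves be optimized across many training tasks so that the emitted updates minimize downstream loss.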
- DEBOSH: Deep Bayesian Shape Optimization [48.80431740983095]
We propose a novel uncertainty-based method tailored to shape optimization.
It enables effective Bayesian optimization (BO) and increases the quality of the resulting shapes beyond that of state-of-the-art approaches.
arXiv Detail & Related papers (2021-09-28T11:01:42Z)
- A Study of Genetic Algorithms for Hyperparameter Optimization of Neural Networks in Machine Translation [0.0]
We propose an automatic tuning method modeled after Darwin's Survival of the Fittest Theory via a Genetic Algorithm.
Research results show that the proposed method, a GA, outperforms a random selection of hyperparameters.
arXiv Detail & Related papers (2020-09-15T02:24:16Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
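A toy rendering of the loop the last entry describes, with a cheap analytic function standing in for the FEM solver (the `fem_objective` below is hypothetical): the DNN surrogate is refit to all evaluations gathered so far, proposes the next design by descending its own prediction, and the proposal is then scored by the expensive solver and added to the data.

```python
import torch
import torch.nn as nn

def fem_objective(design):
    # Hypothetical stand-in for an expensive FEM solve (e.g., compliance);
    # a cheap analytic toy keeps the sketch runnable.
    return ((design - 0.3) ** 2).sum()

def self_directed_loop(dim=8, rounds=10):
    surrogate = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, 1))
    X = [torch.rand(dim) for _ in range(4)]   # seed designs
    Y = [fem_objective(x) for x in X]         # their solver scores
    for _ in range(rounds):
        # 1) Fit the DNN surrogate to every evaluation gathered so far.
        fit = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
        for _ in range(200):
            fit.zero_grad()
            pred = surrogate(torch.stack(X)).squeeze(-1)
            ((pred - torch.stack(Y)) ** 2).mean().backward()
            fit.step()
        # 2) Let the surrogate propose the next design by gradient descent
        #    on its own prediction, starting from the best design so far.
        x = X[int(torch.stack(Y).argmin())].clone().requires_grad_(True)
        propose = torch.optim.Adam([x], lr=5e-2)
        for _ in range(100):
            propose.zero_grad()
            surrogate(x).squeeze().backward()
            propose.step()
        # 3) Verify the proposal with the expensive solver; grow the data.
        x = x.detach()
        X.append(x)
        Y.append(fem_objective(x))
    best = int(torch.stack(Y).argmin())
    return X[best], Y[best]
```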