Feasible Low-thrust Trajectory Identification via a Deep Neural Network Classifier
- URL: http://arxiv.org/abs/2202.04962v1
- Date: Thu, 10 Feb 2022 11:34:37 GMT
- Title: Feasible Low-thrust Trajectory Identification via a Deep Neural Network Classifier
- Authors: Ruida Xie, Andrew G. Dempster
- Abstract summary: This work proposes a deep neural network (DNN) classifier to accurately identify feasible low-thrust transfers prior to the optimization process.
The DNN classifier achieves an overall accuracy of 97.9%, the best performance among the tested algorithms.
- Score: 1.5076964620370268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep learning techniques have been introduced into the field
of trajectory optimization to improve convergence and speed. Training such
models requires large trajectory datasets. However, the convergence of low
thrust (LT) optimizations is unpredictable before the optimization process
ends. For randomly initialized low thrust transfer data generation, most of the
computation power will be wasted on optimizing infeasible low thrust transfers,
which leads to an inefficient data generation process. This work proposes a
deep neural network (DNN) classifier to accurately identify feasible LT
transfers prior to the optimization process. The DNN classifier achieves an
overall accuracy of 97.9%, the best performance among the tested algorithms.
Accurate LT trajectory feasibility identification avoids optimizing undesired
samples, so that the majority of the optimized samples are convergent LT
trajectories. This technique enables efficient
dataset generation for different mission scenarios with different spacecraft
configurations.
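The paper does not include source code; the following is a minimal sketch of the proposed pre-filtering idea, assuming PyTorch, illustrative input features, and a hypothetical layer layout (the actual architecture, feature set, and the `run_lt_optimizer` routine are assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

# Minimal sketch of a binary feasibility classifier used as a pre-filter
# before low-thrust (LT) trajectory optimization. The feature set (e.g.
# boundary orbital elements, time of flight, thrust/mass limits), layer
# sizes, and threshold are illustrative assumptions, not the paper's design.
class FeasibilityClassifier(nn.Module):
    def __init__(self, n_features: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit for "this LT transfer is feasible"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def filter_candidates(model: nn.Module, candidates: torch.Tensor,
                      threshold: float = 0.5) -> torch.Tensor:
    """Keep only candidates predicted feasible, so the expensive LT
    optimizer runs almost exclusively on transfers that will converge."""
    model.eval()
    with torch.no_grad():
        p_feasible = torch.sigmoid(model(candidates)).squeeze(-1)
    return candidates[p_feasible >= threshold]

candidates = torch.randn(1000, 10)   # hypothetical random transfer samples
model = FeasibilityClassifier()      # assume trained with BCEWithLogitsLoss
promising = filter_candidates(model, candidates)
# for sample in promising: run_lt_optimizer(sample)  # hypothetical optimizer
```

Screening randomly initialized samples this way is what lets dataset generation spend nearly all of its optimization budget on convergent trajectories.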
Related papers
- Learning to Generate Gradients for Test-Time Adaptation via Test-Time Training Layers [18.921532965557475]
Test-time adaptation aims to fine-tune a trained model online using unlabeled testing data.
In this optimization process, unsupervised learning objectives such as entropy minimization frequently encounter noisy learning signals.
We employ a learning-to-optimize approach to automatically learn an entropy generator called Meta Gradient Generator.
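For context, a minimal sketch of the plain entropy-minimization objective this line of work builds on (the paper's learned Meta Gradient Generator replaces this hand-crafted signal and is not reproduced here):

```python
import torch
import torch.nn.functional as F

def entropy_minimization_step(model, x, optimizer):
    """One classic test-time adaptation step: reduce the Shannon entropy of
    the model's predictions on an unlabeled test batch x. This is the noisy
    signal the paper's learned gradient generator aims to improve on."""
    logits = model(x)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```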
arXiv Detail & Related papers (2024-12-22T07:24:09Z)
- GDSG: Graph Diffusion-based Solution Generator for Optimization Problems in MEC Networks [109.17835015018532]
We present a Graph Diffusion-based Solution Generation (GDSG) method.
This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability.
We build GDSG as a multi-task diffusion model utilizing a Graph Neural Network (GNN) to acquire the distribution of high-quality solutions.
arXiv Detail & Related papers (2024-12-11T11:13:43Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the functional graphical model (FGM) structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Prior-mean-assisted Bayesian optimization application on FRIB Front-End tuning [61.78406085010957]
We exploit a neural network model trained over historical data as a prior mean of BO for FRIB Front-End tuning.
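A minimal sketch of the idea: a Gaussian-process surrogate whose prior mean comes from a learned model rather than zero (toy RBF kernel; the `prior_mean` callable is a hypothetical stand-in for the trained NN, not the authors' code):

```python
import numpy as np

def rbf(A, B, length_scale=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(Xq, X, y, prior_mean, noise=1e-6):
    """GP posterior mean with a non-zero prior mean m(x): the surrogate
    regresses only the residual y - m(X), so a good prior (the trained NN
    in the paper; any callable here) accelerates Bayesian optimization."""
    K = rbf(X, X) + noise * np.eye(len(X))
    resid = y - prior_mean(X)
    return prior_mean(Xq) + rbf(Xq, X) @ np.linalg.solve(K, resid)

prior_mean = lambda X: 0.1 * X.sum(axis=1)  # hypothetical stand-in for the NN
X = np.random.rand(5, 2)
y = prior_mean(X) + 0.05 * np.random.randn(5)
print(gp_posterior_mean(np.random.rand(3, 2), X, y, prior_mean))
```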
arXiv Detail & Related papers (2022-11-11T18:34:15Z)
- Analytical Characterization and Design Space Exploration for Optimization of CNNs [10.15406080228806]
Loop-level optimization, including loop tiling and loop permutation, are fundamental transformations to reduce data movement.
This paper develops an analytical modeling approach for finding the best loop-level optimization configuration for CNNs on multi-core CPUs.
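As a toy illustration of loop tiling (a blocked matrix multiply; the paper targets full CNN loop nests on multi-core CPUs, which this does not reproduce):

```python
import numpy as np

def tiled_matmul(A: np.ndarray, B: np.ndarray, T: int = 64) -> np.ndarray:
    """Blocked (tiled) matrix multiply: working on T x T tiles keeps the
    active data in cache and reduces data movement; the tile size T and the
    loop order are the kind of configuration an analytical model searches."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(0, n, T):
        for j in range(0, m, T):
            for p in range(0, k, T):  # permuting these loops is also tunable
                C[i:i+T, j:j+T] += A[i:i+T, p:p+T] @ B[p:p+T, j:j+T]
    return C

A, B = np.random.rand(200, 300), np.random.rand(300, 150)
assert np.allclose(tiled_matmul(A, B), A @ B)
```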
arXiv Detail & Related papers (2021-01-24T21:36:52Z)
- Improving Neural Network Training in Low Dimensional Random Bases [5.156484100374058]
We show that keeping the random projection fixed throughout training is detrimental to optimization.
We propose re-drawing the random subspace at each step, which yields significantly better performance.
We realize further improvements by applying independent projections to different parts of the network, making the approximation more efficient as network dimensionality grows.
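A minimal sketch of the re-drawn random subspace idea on a toy quadratic loss (the objective, dimensions, and step size are illustrative assumptions):

```python
import numpy as np

# Gradient descent restricted to a fresh low-dimensional random subspace at
# every step, as the summary above describes. Toy objective 0.5 * ||theta||^2.
def grad(theta):
    return theta  # gradient of the toy quadratic loss

D, d, lr = 1000, 20, 0.1          # full vs. subspace dimension (illustrative)
theta = np.random.randn(D)
for step in range(200):
    P = np.random.randn(D, d) / np.sqrt(d)  # re-drawn projection each step
    g_low = P.T @ grad(theta)               # gradient seen in the subspace
    theta -= lr * (P @ g_low)               # update stays in span(P)
print("final loss:", 0.5 * np.dot(theta, theta))
```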
arXiv Detail & Related papers (2020-11-09T19:50:19Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
- Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization [0.0]
Adjoint-based optimization methods are attractive for aerodynamic shape design.
However, they can become prohibitively expensive when multiple optimization problems are being solved.
We propose a machine learning enabled, surrogate-based framework that replaces the expensive adjoint solver.
arXiv Detail & Related papers (2020-08-15T15:09:21Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
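A hedged sketch of such a self-directed loop under stated assumptions: an scikit-learn MLP stands in for the DNN, and a toy function (`fem_objective`) stands in for the expensive FEM solver; the actual method and problem setup are in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in for the DNN

def fem_objective(x):
    # Hypothetical stand-in for an expensive FEM evaluation.
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
X = rng.random((8, 4))                             # initial labeled designs
y = np.array([fem_objective(x) for x in X])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
for _ in range(20):
    model.fit(X, y)                                # learn objective landscape
    cand = rng.random((256, 4))                    # sample candidate designs
    best = cand[np.argmin(model.predict(cand))]    # surrogate-guided selection
    X = np.vstack([X, best])                       # label it with the solver
    y = np.append(y, fem_objective(best))
print("best design:", X[np.argmin(y)], "objective:", y.min())
```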
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.