Density Estimation for Entry Guidance Problems using Deep Learning
- URL: http://arxiv.org/abs/2310.19684v1
- Date: Mon, 30 Oct 2023 16:03:37 GMT
- Title: Density Estimation for Entry Guidance Problems using Deep Learning
- Authors: Jens A. Rataczak, Davide Amato, Jay W. McMahon
- Abstract summary: A long short-term memory neural network is trained to learn the mapping between measurements available onboard an entry vehicle and the density profile through which it is flying.
The trained LSTM is capable of both predicting the density profile through which the vehicle will fly and reconstructing the density profile through which it has already flown.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents a deep-learning approach to estimate atmospheric density
profiles for use in planetary entry guidance problems. A long short-term memory
(LSTM) neural network is trained to learn the mapping between measurements
available onboard an entry vehicle and the density profile through which it is
flying. Measurements include the spherical state representation, Cartesian
sensed acceleration components, and a surface-pressure measurement. Training
data for the network is initially generated by performing a Monte Carlo
analysis of an entry mission at Mars using the fully numerical
predictor-corrector entry guidance (FNPEG) algorithm, which utilizes an exponential
density model, while the truth density profiles are sampled from MarsGRAM. A
curriculum learning procedure is developed to refine the LSTM network's
predictions for integration within the FNPEG algorithm. The trained LSTM is
capable of both predicting the density profile through which the vehicle will
fly and reconstructing the density profile through which it has already flown.
The performance of the FNPEG algorithm is assessed for three different density
estimation techniques: an exponential model, an exponential model augmented
with a first-order fading-memory filter, and the LSTM network. Results
demonstrate that using the LSTM model results in superior terminal accuracy
compared to the other two techniques when considering both noisy and noiseless
measurements.
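
The two baseline density-estimation techniques the paper compares against can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Mars surface-density and scale-height constants are assumed round numbers (the paper samples truth profiles from MarsGRAM), and the fading-memory filter is written as one common first-order form whose exact structure may differ from the paper's.

```python
import math

# Illustrative Mars-like constants; both values are assumptions for this
# sketch, not taken from the paper (which uses MarsGRAM truth profiles).
RHO0 = 0.020       # surface density, kg/m^3 (assumed)
H_SCALE = 11100.0  # density scale height, m (assumed)

def exp_density(h_m, rho0=RHO0, h_scale=H_SCALE):
    """Exponential density model: rho(h) = rho0 * exp(-h / h_scale)."""
    return rho0 * math.exp(-h_m / h_scale)

class FadingMemoryFilter:
    """First-order fading-memory estimate of a multiplicative density
    correction:  k <- beta * k + (1 - beta) * (rho_sensed / rho_model).
    One common filter form; the paper's exact formulation may differ."""

    def __init__(self, beta=0.9):
        self.beta = beta
        self.k = 1.0  # start with an uncorrected exponential model

    def update(self, rho_sensed, rho_model):
        # Blend the old correction factor with the newest model/sensed ratio.
        ratio = rho_sensed / rho_model
        self.k = self.beta * self.k + (1.0 - self.beta) * ratio
        return self.k * rho_model  # filtered density estimate
```

For example, if the density sensed from accelerometer data consistently runs 20% above the exponential model, the correction factor `k` converges toward 1.2, so the guidance algorithm propagates trajectories through a correspondingly denser atmosphere than the raw model would predict.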
Related papers
- Scaling Laws for Predicting Downstream Performance in LLMs [75.28559015477137] (2024-10-11)
  This work focuses on the pre-training loss as a more-efficient metric for performance estimation.
  We extend the power-law analytical function to predict domain-specific pre-training loss based on FLOPs across data sources.
  We employ a two-layer neural network to model the non-linear relationship between multiple domain-specific losses and downstream performance.
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195] (2023-12-04)
  Entropy and mutual information in neural networks provide rich information on the learning process.
  We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
  We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457] (2023-10-04)
  We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
  We propose a two-stage algorithm to solve this intractable problem, in which we provide closed-form solutions for the beamformers.
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946] (2023-03-03)
  Physics-informed neural networks (PINNs) have been effective in solving forward and inverse differential-equation problems.
  PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
  In this paper, we propose the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
- Self-learning Locally-optimal Hypertuning Using Maximum Entropy, and Comparison of Machine Learning Approaches for Estimating Fatigue Life in Composite Materials [0.0] (2022-10-19)
  We develop an ML nearest-neighbors-like algorithm based on the principle of maximum entropy to predict fatigue damage.
  The predictions achieve a good level of accuracy, similar to other ML algorithms.
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251] (2021-11-29)
  State-of-the-art language models (LMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming increasingly complex and expensive for practical applications.
  Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
  Novel mixed-precision neural network LM quantization methods are proposed in this paper.
- Estimating Permeability of 3D Micro-CT Images by Physics-informed CNNs Based on DNS [1.6274397329511197] (2021-09-04)
  This paper presents a novel methodology for permeability prediction from micro-CT scans of geological rock samples.
  Training data sets for CNNs dedicated to permeability prediction typically consist of permeability labels generated by classical lattice Boltzmann methods (LBM).
  We instead perform direct numerical simulation (DNS) by solving the stationary Stokes equation in an efficient and distributed-parallel manner.
- A Physics-Informed Deep Learning Paradigm for Traffic State Estimation and Fundamental Diagram Discovery [3.779860024918729] (2021-06-06)
  This paper contributes an improved paradigm called physics-informed deep learning with a fundamental diagram learner (PIDL+FDL).
  PIDL+FDL integrates ML terms into the model-driven component to learn a functional form of a fundamental diagram (FD), i.e., a mapping from traffic density to flow or velocity.
  We demonstrate the use of PIDL+FDL to solve popular first-order and second-order traffic flow models and reconstruct the FD relation.
- Learning Optical Flow from a Few Matches [67.83633948984954] (2021-04-05)
  We show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements.
  Experiments show that our method can significantly reduce computational cost and memory use while maintaining high accuracy.
- Mission-Aware Spatio-Temporal Deep Learning Model for UAS Instantaneous Density Prediction [3.59465210252619] (2020-03-22)
  The number of daily sUAS operations in uncontrolled low-altitude airspace is expected to reach the millions within a few years.
  A deep learning-based UAS instantaneous density prediction model is presented.