Tight Constraint Prediction of Six-Degree-of-Freedom Transformer-based Powered Descent Guidance
- URL: http://arxiv.org/abs/2501.00930v1
- Date: Wed, 01 Jan 2025 19:07:27 GMT
- Title: Tight Constraint Prediction of Six-Degree-of-Freedom Transformer-based Powered Descent Guidance
- Authors: Julia Briden, Trey Gurga, Breanna Johnson, Abhishek Cauligi, Richard Linares
- Abstract summary: This work introduces Transformer-based Successive Convexification (T-SCvx) for efficient six-degree-of-freedom (DoF) fuel-optimal powered descent trajectory generation. By learning to predict the set of tight constraints at the optimal control problem's solution, T-SCvx creates a minimal reduced-size problem initialized with only the tight constraints.
- Score: 1.1254693939127909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces Transformer-based Successive Convexification (T-SCvx), an extension of Transformer-based Powered Descent Guidance (T-PDG), generalizable for efficient six-degree-of-freedom (DoF) fuel-optimal powered descent trajectory generation. Our approach significantly enhances the sample efficiency and solution quality for nonconvex powered descent guidance by employing a rotation-invariant transformation of the sampled dataset. T-PDG was previously applied to the 3-DoF minimum-fuel powered descent guidance problem, improving solution times by up to an order of magnitude compared to lossless convexification (LCvx). By learning to predict the set of tight or active constraints at the optimal control problem's solution, Transformer-based Successive Convexification (T-SCvx) creates the minimal reduced-size problem initialized with only the tight constraints, then uses the solution of this reduced problem to warm-start the direct optimization solver. 6-DoF powered descent guidance is known to be challenging to solve quickly and reliably due to the nonlinear and non-convex nature of the problem, the discretization scheme heavily influencing solution validity, and reference trajectory initialization determining algorithm convergence or divergence. Our contributions in this work address these challenges by extending T-PDG to learn the set of tight constraints for the successive convexification (SCvx) formulation of the 6-DoF powered descent guidance problem. In addition to reducing the problem size, feasible and locally optimal reference trajectories are also learned to facilitate convergence from the initial guess. T-SCvx enables onboard computation of real-time guidance trajectories, demonstrated by a 6-DoF Mars powered landing application problem.
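The tight-constraint idea in the abstract can be illustrated on a toy problem. The sketch below is not the paper's 6-DoF SCvx formulation: it solves a small hypothetical linear program twice, once with all constraints and once with only a hand-picked "predicted" active subset (standing in for the transformer's prediction), then runs a full-constraint feasibility check as T-PDG does before returning a trajectory. All constraint data and the predicted tight set are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize c @ x subject to A x <= b. The first two rows are tight
# at the optimum; the last two are slack. In T-SCvx a trained transformer
# would predict which rows are tight from the problem parameters.
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],     # -x1 <= -1  (x1 >= 1, tight)
              [0.0, -1.0],     # -x2 <= -2  (x2 >= 2, tight)
              [1.0, 1.0],      # x1 + x2 <= 10  (slack)
              [1.0, -1.0]])    # x1 - x2 <= 5   (slack)
b = np.array([-1.0, -2.0, 10.0, 5.0])
bounds = [(None, None)] * 2

full = linprog(c, A_ub=A, b_ub=b, bounds=bounds)

# Reduced problem built from the predicted tight set only.
tight = [0, 1]
reduced = linprog(c, A_ub=A[tight], b_ub=b[tight], bounds=bounds)

# Feasibility check against the *full* constraint set before accepting
# the reduced solution, mirroring the safeguard described for T-PDG.
assert np.all(A @ reduced.x <= b + 1e-8)
assert np.isclose(full.fun, reduced.fun)
```

When the predicted tight set is correct, the reduced problem recovers the same optimum at lower cost; when the check fails, the method falls back to solving the full problem.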
Related papers
- Nonconvex Optimization Framework for Group-Sparse Feedback Linear-Quadratic Optimal Control: Non-Penalty Approach [3.585860184121598]
We address the challenge of tuning the penalty parameter and the risk of introducing stationary points.
Our results enable direct design of group-sparse feedback gains without resorting to certain assumptions.
arXiv Detail & Related papers (2025-07-26T09:50:21Z)
- Towards Robust Spacecraft Trajectory Optimization via Transformers [17.073280827888226]
We develop an autonomous generative model to solve nonconvex optimal control problems in real time.
We extend the capabilities of ART to address robust chance-constrained optimal control problems.
This work marks an initial step toward the reliable deployment of AI-driven solutions in safety-critical autonomous systems such as spacecraft.
arXiv Detail & Related papers (2024-10-08T00:58:42Z)
- Improving Computational Efficiency for Powered Descent Guidance via Transformer-based Tight Constraint Prediction [1.2074552857379275]
Transformer-based Powered Descent Guidance (T-PDG) is a scalable algorithm for reducing the computational complexity of the direct optimization formulation of the spacecraft powered descent guidance problem.
T-PDG uses data from prior runs of trajectory optimization algorithms to train a transformer neural network, which accurately predicts the relationship between problem parameters and the tight constraints at the optimal solution.
A safe and optimal solution is guaranteed by including a feasibility check in T-PDG before returning the final trajectory.
arXiv Detail & Related papers (2023-11-09T04:26:25Z)
- Revisiting Implicit Differentiation for Learning Problems in Optimal Control [31.622109513774635]
This paper proposes a new method for differentiating through optimal trajectories arising from non-convex, constrained discrete-time optimal control (COC) problems.
We show that the trajectory derivatives scale linearly with the number of timesteps and offer significantly improved scalability with model size.
arXiv Detail & Related papers (2023-10-23T00:51:24Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
arXiv Detail & Related papers (2022-10-17T17:30:09Z)
- A Variance-Reduced Stochastic Gradient Tracking Algorithm for Decentralized Optimization with Orthogonality Constraints [7.028225540638832]
We propose a novel algorithm for decentralized optimization with orthogonality constraints.
VRSGT is the first algorithm for decentralized optimization with orthogonality constraints that reduces both sampling and communication complexities simultaneously.
In the numerical experiments, VRSGT achieves promising performance on a real-world autonomous driving application.
arXiv Detail & Related papers (2022-08-29T14:46:44Z)
- Deep $\mathcal{L}^1$ Stochastic Optimal Control Policies for Planetary Soft-landing [9.714390258486569]
We introduce a novel deep learning based solution to the Powered-Descent Guidance (PDG) problem.
Our SOC framework can handle practically useful $\mathcal{L}^1$ constraints pre-specified for minimum fuel consumption.
We demonstrate that our controller can successfully and safely land all trajectories at the base of an inverted cone while minimizing fuel consumption.
arXiv Detail & Related papers (2021-09-01T04:28:38Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, albeit only up to an error neighborhood.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
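The coupled-update idea behind Cogradient Descent can be sketched on a toy problem. The code below is a hedged illustration, not the CoGD paper's exact algorithm (it omits, for example, the sparsity projection): it takes simultaneous gradient steps on both factors of a bilinear least-squares objective, so each variable's update accounts for its coupling with the other through a shared residual. The target matrix, dimensions, and step size are all illustrative assumptions.

```python
import numpy as np

# Bilinear least-squares objective: f(x, y) = 0.5 * ||x y^T - A||_F^2.
rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(6), rng.standard_normal(4))  # rank-1 target

x = rng.standard_normal(6)
y = rng.standard_normal(4)
lr = 0.01

def objective(x, y):
    return 0.5 * np.linalg.norm(np.outer(x, y) - A) ** 2

f0 = objective(x, y)
for _ in range(300):
    R = np.outer(x, y) - A            # shared residual couples the updates
    gx, gy = R @ y, R.T @ x           # gradients w.r.t. x and y
    x, y = x - lr * gx, y - lr * gy   # synchronous step on both variables

assert objective(x, y) < f0           # joint descent reduces the objective
```

Updating both factors from the same residual, rather than alternating full minimizations, is the synchronous flavor of descent the CoGD summary refers to.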
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.