Learning Optimal Control and Dynamical Structure of Global Trajectory Search Problems with Diffusion Models
- URL: http://arxiv.org/abs/2410.02976v2
- Date: Sun, 29 Dec 2024 22:13:12 GMT
- Title: Learning Optimal Control and Dynamical Structure of Global Trajectory Search Problems with Diffusion Models
- Authors: Jannik Graebner, Anjian Li, Amlan Sinha, Ryne Beeson
- Abstract summary: This paper explores two global search problems in the circular restricted three-body problem.
We build on our prior generative machine learning framework to apply diffusion models to learn the conditional probability distribution.
- Score: 0.5399800035598186
- Abstract: Spacecraft trajectory design is a global search problem, where previous work has revealed specific solution structures that can be captured with data-driven methods. This paper explores two global search problems in the circular restricted three-body problem: hybrid cost function of minimum fuel/time-of-flight and transfers to energy-dependent invariant manifolds. These problems display a fundamental structure either in the optimal control profile or the use of dynamical structures. We build on our prior generative machine learning framework to apply diffusion models to learn the conditional probability distribution of the search problem and analyze the model's capability to capture these structures.
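For readers unfamiliar with the underlying machinery, the following is a minimal sketch of how a conditional diffusion model could be trained on a dataset of locally optimal solution vectors, with the problem parameter (e.g., a cost-function weighting or manifold energy) as the conditioning input. The network, dataset tensors, and hyperparameters are illustrative assumptions, not the authors' implementation.
```python
# Hedged sketch: DDPM-style training of a conditional denoiser over flattened
# solution vectors (e.g., control or costate parameters). All names, shapes,
# and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

T = 200                                      # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class SolutionDenoiser(nn.Module):
    """Predicts the noise added to a solution vector x_t, given the diffusion
    step t and the conditioning parameters of the trajectory problem."""
    def __init__(self, x_dim, cond_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.float().unsqueeze(-1) / T             # normalized step index
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

def train_step(model, opt, x0, cond):
    """One denoising step on a batch of known good solutions x0 with their
    conditioning parameters cond (standard noise-prediction MSE objective)."""
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    loss = nn.functional.mse_loss(model(x_t, t, cond), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random stand-ins for a real dataset of local optima:
model = SolutionDenoiser(x_dim=64, cond_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(train_step(model, opt, torch.randn(32, 64), torch.rand(32, 2)))
```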
Related papers
- Global Search for Optimal Low Thrust Spacecraft Trajectories using Diffusion Models and the Indirect Method [0.0]
Global search for long-duration, low-thrust, nonlinear optimal spacecraft trajectories is a computationally expensive and time-consuming problem.
Generative machine learning models can be trained to learn how the solution structure varies with respect to a conditional parameter.
State-of-the-art diffusion models are integrated with the indirect approach for trajectory optimization within a global search framework.
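As an illustration of how sampled guesses can feed the indirect method, here is a minimal sketch where candidate initial costates seed a shooting-style root solve; `shooting_residual`, the 7-dimensional costate, and the toy problem data are placeholders under assumption, not the paper's formulation.
```python
# Hedged sketch of the "sample, then correct" pattern with the indirect method:
# each sampled initial costate seeds a local root solve of the boundary
# conditions. The residual below is a toy stand-in, not real dynamics.
import numpy as np
from scipy.optimize import root

def shooting_residual(costate0, problem):
    """Placeholder: a real implementation would propagate the state/costate
    dynamics from costate0 and return the terminal boundary-condition errors."""
    return costate0 - problem["target"]

def polish_samples(samples, problem):
    """Run a local shooting solve from every sampled guess; keep the converged ones."""
    solutions = []
    for guess in samples:
        sol = root(shooting_residual, guess, args=(problem,), method="hybr")
        if sol.success:
            solutions.append(sol.x)
    return solutions

# Random guesses stand in for diffusion-model samples here:
problem = {"target": np.zeros(7)}            # 7 costates assumed for a 3D low-thrust model
candidates = np.random.randn(16, 7)
print(len(polish_samples(candidates, problem)), "converged candidates")
```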
arXiv Detail & Related papers (2025-01-13T01:49:17Z)
- Global Search of Optimal Spacecraft Trajectories using Amortization and Deep Generative Models [0.5898893619901381]
We formulate the parameterized global search problem as the task of sampling a conditional probability distribution with support on the neighborhoods of local basins of attraction to the high quality solutions.
The approach is benchmarked on a low-thrust spacecraft trajectory optimization problem in the circular restricted three-body problem.
The paper also provides an in-depth analysis of the multi-modal funnel structure of a low-thrust spacecraft trajectory optimization problem.
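To make the sampling side concrete, here is a minimal ancestral-sampling sketch that pairs with the training sketch given after the main abstract; `model` is any denoiser with the same (x_t, t, cond) interface, and the schedule and dimensions are again assumptions.
```python
# Hedged sketch of DDPM ancestral sampling conditioned on problem parameters.
# Each returned vector is a candidate solution to hand to a local optimizer,
# so one trained model amortizes the global search across the parameter range.
import torch

@torch.no_grad()
def sample_solutions(model, cond, x_dim, T=200):
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(cond.shape[0], x_dim)                 # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((cond.shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch, cond)                      # predicted noise
        mean = (x - betas[t] / (1.0 - alphas_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x
```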
arXiv Detail & Related papers (2024-12-28T04:57:12Z)
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative models for network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models [65.82630283336051]
We show that the space spanned by the combination of dimensions and attributes is insufficiently sampled by existing training schemes of diffusion generative models.
We present a simple fix to this problem by constructing processes that fully exploit the structures, hence the name ComboStoc.
arXiv Detail & Related papers (2024-05-22T15:23:10Z)
- Amortized Global Search for Efficient Preliminary Trajectory Design with Deep Generative Models [1.1602089225841632]
Preliminary trajectory design is a global search problem that seeks multiple qualitatively different solutions to a trajectory optimization problem.
In this paper, we exploit the structure in the solutions to propose an amortized global search (AGS) framework.
Our method is evaluated on De Jong's 5th function and a low-thrust circular restricted three-body problem.
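For reference, De Jong's 5th function (Shekel's foxholes) is a standard 2-D benchmark whose 25 narrow basins make it a convenient proxy for multi-basin global search; a straightforward implementation is sketched below.
```python
# De Jong's 5th function (Shekel's foxholes): 25 narrow basins on a 5x5 grid.
import numpy as np

def dejong5(x1, x2):
    vals = np.array([-32.0, -16.0, 0.0, 16.0, 32.0])
    a1 = np.tile(vals, 5)            # x-coordinates of the 25 foxholes
    a2 = np.repeat(vals, 5)          # y-coordinates of the 25 foxholes
    j = np.arange(1, 26)
    total = np.sum(1.0 / (j + (x1 - a1) ** 6 + (x2 - a2) ** 6))
    return 1.0 / (0.002 + total)

# The global minimum sits near the first foxhole at (-32, -32):
print(dejong5(-32.0, -32.0))         # approximately 0.998
```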
arXiv Detail & Related papers (2023-08-07T23:52:03Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- Exploring Neural Models for Query-Focused Summarization [74.41256438059256]
We conduct a systematic exploration of neural approaches to query-focused summarization (QFS).
We present two model extensions that achieve state-of-the-art performance on the QMSum dataset by a margin of up to 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L.
arXiv Detail & Related papers (2021-12-14T18:33:29Z)
- Triple-level Model Inferred Collaborative Network Architecture for Video Deraining [43.06607185181434]
We develop a model-guided triple-level optimization framework that deduces the network architecture through a cooperative optimization and auto-search mechanism.
Our model shows significant improvements in fidelity and temporal consistency over the state-of-the-art works.
arXiv Detail & Related papers (2021-11-08T13:09:00Z)
- Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy-model-guided fuzzer for software testing that achieves performance comparable to well-engineered fuzzing engines like libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
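A minimal sketch of that setup, using toy quadratic local losses (an assumption purely for illustration): at each iteration a random subset of agents takes a few local gradient steps and the server averages the results.
```python
# Hedged sketch of federated averaging with partial (random-subset) participation.
# The quadratic local objectives and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, subset_size, lr = 20, 5, 5, 0.1
targets = rng.normal(size=(n_agents, dim))       # each agent's local minimizer
w = np.zeros(dim)                                # shared global model

for _ in range(100):
    active = rng.choice(n_agents, size=subset_size, replace=False)
    local_models = []
    for k in active:
        w_k = w.copy()
        for _ in range(3):                       # a few local gradient steps
            w_k -= lr * (w_k - targets[k])       # gradient of 0.5*||w - target_k||^2
        local_models.append(w_k)
    w = np.mean(local_models, axis=0)            # server-side aggregation

print(np.linalg.norm(w - targets.mean(axis=0)))  # hovers near the aggregate minimizer
```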
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
- A Deep Reinforcement Learning Algorithm Using Dynamic Attention Model for Vehicle Routing Problems [20.52666896700441]
This paper focuses on a challenging NP-hard problem, the vehicle routing problem.
Our model outperforms previous methods and also shows good generalization performance.
arXiv Detail & Related papers (2020-02-09T04:51:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.