Optimality Principles in Spacecraft Neural Guidance and Control
- URL: http://arxiv.org/abs/2305.13078v1
- Date: Mon, 22 May 2023 14:48:58 GMT
- Title: Optimality Principles in Spacecraft Neural Guidance and Control
- Authors: Dario Izzo, Emmanuel Blazquez, Robin Ferede, Sebastien Origer,
Christophe De Wagter, Guido C.H.E. de Croon
- Abstract summary: We argue that end-to-end neural guidance and control architectures (here called G&CNets) allow transferring onboard the burden of acting upon optimality principles.
In this way, the sensor information is transformed in real time into optimal plans, thus increasing mission autonomy and robustness.
We discuss the main results obtained in training such neural architectures in simulation for interplanetary transfers, landings and close proximity operations.
- Score: 16.59877059263942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spacecraft and drones aimed at exploring our solar system are designed to
operate in conditions where the smart use of onboard resources is vital to the
success of the mission. Sensorimotor actions are thus often derived
from high-level, quantifiable, optimality principles assigned to each task,
utilizing consolidated tools in optimal control theory. The planned actions are
derived on the ground and transferred onboard where controllers have the task
of tracking the uploaded guidance profile. Here we argue that end-to-end neural
guidance and control architectures (here called G&CNets) allow transferring
onboard the burden of acting upon these optimality principles. In this way, the
sensor information is transformed in real time into optimal plans, thus
increasing mission autonomy and robustness. We discuss the main results
obtained in training such neural architectures in simulation for interplanetary
transfers, landings and close proximity operations, highlighting the successful
learning of optimality principles by the neural model. We then suggest drone
racing as an ideal gym environment to test these architectures on real robotic
platforms, thus increasing confidence in their utilization on future space
exploration missions. Drone racing shares with spacecraft missions both limited
onboard computational capabilities and similar control structures induced from
the optimality principle sought, but it also entails different levels of
uncertainties and unmodelled effects. Furthermore, the success of G&CNets on
extremely resource-restricted drones illustrates their potential to bring
real-time optimal control within reach of a wider variety of robotic systems,
both in space and on Earth.
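The core idea behind a G&CNet can be sketched with a minimal, hypothetical example (not the authors' code): a network is trained by behavioural cloning on state/optimal-action pairs so that it maps the sensed state directly to a near-optimal control. Here the "optimal" teacher is the textbook time-optimal bang-bang feedback for a double integrator, u*(x, v) = -sign(x + v|v|/2), and the network is a small hand-rolled NumPy MLP; in the papers discussed, the training data would instead come from an optimal-control solver.

```python
import numpy as np

# Hypothetical illustration of imitation learning of an optimality principle:
# the teacher is the analytic time-optimal feedback for a double integrator
# with |u| <= 1, and a tiny MLP learns to reproduce it from raw state.

rng = np.random.default_rng(0)

def optimal_control(x, v):
    """Time-optimal bang-bang law: u* = -sign(x + v|v|/2)."""
    s = x + 0.5 * v * np.abs(v)          # switching function
    return -np.sign(s + 1e-12)           # nudge avoids sign(0)

# Dataset of optimal state-action pairs (a stand-in for trajectories
# produced by an optimal-control solver).
X = rng.uniform(-2, 2, size=(4096, 2))
Y = optimal_control(X[:, 0], X[:, 1]).reshape(-1, 1)

# Two-layer MLP; tanh output respects the control bound |u| <= 1.
H = 32
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    u = np.tanh(h @ W2 + b2)             # predicted control
    err = u - Y                          # gradient of the L2 imitation loss
    gu = err * (1 - u**2) / len(X)
    gW2 = h.T @ gu; gb2 = gu.sum(0)
    gh = gu @ W2.T * (1 - h**2)
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# On unseen states, the trained net should agree with the optimal policy
# on the sign (i.e. direction) of the control.
Xt = rng.uniform(-2, 2, size=(512, 2))
pred = np.tanh(np.tanh(Xt @ W1 + b1) @ W2 + b2)
agree = np.mean(np.sign(pred) == optimal_control(Xt[:, 0], Xt[:, 1]).reshape(-1, 1))
print(f"sign agreement with optimal policy: {agree:.2%}")
```

Once trained, evaluating the network is just two matrix products and two `tanh` calls, which is why such policies fit on the resource-restricted hardware of spacecraft and racing drones.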
Related papers
- LLMSat: A Large Language Model-Based Goal-Oriented Agent for Autonomous Space Exploration [0.0]
This work explores the application of Large Language Models (LLMs) as the high-level control system of a spacecraft.
A series of deep space mission scenarios simulated within the popular game engine Kerbal Space Program are used as case studies to evaluate the implementation against the requirements.
arXiv Detail & Related papers (2024-04-13T03:33:17Z)
- Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning [66.10854214036605]
A central question in robotics is how to design a control system for an agile mobile robot.
We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting.
Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour.
arXiv Detail & Related papers (2023-10-17T02:40:27Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers [11.29285364660789]
Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation.
Previous work has identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots.
In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing.
arXiv Detail & Related papers (2022-09-22T14:38:10Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- Time-Optimal Planning for Quadrotor Waypoint Flight [50.016821506107455]
Planning time-optimal trajectories at the actuation limit of a quadrotor is an open problem.
We propose a solution that exploits the quadrotor's full actuator potential.
We validate our method in real-world flights in one of the world's largest motion-capture systems.
arXiv Detail & Related papers (2021-08-10T09:26:43Z)
- Advances in Trajectory Optimization for Space Vehicle Control [2.8557067479929152]
This survey paper provides a detailed overview of recent advances, successes, and promising directions for optimization-based space vehicle control.
The considered applications include planetary landing, rendezvous and proximity operations, small body landing, constrained reorientation, and endo-atmospheric flight.
The reader will come away with a well-rounded understanding of the state-of-the-art in each space vehicle control application.
arXiv Detail & Related papers (2021-08-05T01:36:27Z)
- DikpolaSat Mission: Improvement of Space Flight Performance and Optimal Control Using Trained Deep Neural Network -- Trajectory Controller for Space Objects Collision Avoidance [0.0]
This paper shows how the controller demonstration is carried out by having the spacecraft follow a desired path.
The obstacle avoidance algorithm is built into the control features to respond spontaneously using inputs from the neural network.
Multiple algorithms for optimizing flight controls and fuel consumption can be implemented using knowledge of the flight dynamics along the trajectory.
arXiv Detail & Related papers (2021-05-30T23:35:13Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Real-Time Optimal Guidance and Control for Interplanetary Transfers Using Deep Networks [10.191757341020216]
Imitation learning of optimal examples is used as a network training paradigm.
G&CNETs are suitable for an on-board, real-time, implementation of the optimal guidance and control system of the spacecraft.
arXiv Detail & Related papers (2020-02-20T23:37:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.