LLMSat: A Large Language Model-Based Goal-Oriented Agent for Autonomous Space Exploration
- URL: http://arxiv.org/abs/2405.01392v1
- Date: Sat, 13 Apr 2024 03:33:17 GMT
- Title: LLMSat: A Large Language Model-Based Goal-Oriented Agent for Autonomous Space Exploration
- Authors: David Maranto
- Abstract summary: This work explores the application of Large Language Models (LLMs) as the high-level control system of a spacecraft.
A series of deep space mission scenarios simulated within the popular game engine Kerbal Space Program are used as case studies to evaluate the implementation against the requirements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As spacecraft journey further from Earth with more complex missions, systems of greater autonomy and onboard intelligence are called for. Reducing reliance on human-based mission control becomes increasingly critical if we are to increase our rate of solar-system-wide exploration. Recent work has explored AI-based goal-oriented systems to increase the level of autonomy in mission execution. These systems make use of symbolic reasoning managers to make inferences from the state of a spacecraft and a handcrafted knowledge base, enabling autonomous generation of tasks and re-planning. Such systems have proven successful in controlled cases, but they are difficult to implement as they require human-crafted ontological models to allow the spacecraft to understand the world. Reinforcement learning has been applied to train robotic agents to pursue a goal. A new architecture for autonomy is called for. This work explores the application of Large Language Models (LLMs) as the high-level control system of a spacecraft. Using a systems engineering approach, this work presents the design and development of an agentic spacecraft controller that leverages an LLM as a reasoning engine, to evaluate the utility of such an architecture in achieving higher levels of spacecraft autonomy. A series of deep space mission scenarios simulated within the popular game engine Kerbal Space Program (KSP) are used as case studies to evaluate the implementation against the requirements. It is shown that the reasoning and planning abilities of present-day LLMs do not scale well as the complexity of a mission increases, but this can be alleviated with adequate prompting frameworks and strategic selection of the agent's level of authority over the host spacecraft. This research evaluates the potential of LLMs in augmenting autonomous decision-making systems for future robotic space applications.
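To make the proposed architecture more concrete, the following is a minimal, hypothetical sketch of an LLM-in-the-loop control cycle with a configurable level of authority, in the spirit of the agent described above. None of the names (SpacecraftInterface, query_llm, the authority levels or action strings) come from the paper or from the kRPC API; a real implementation would replace query_llm with a call to an actual model and SpacecraftInterface with a simulator bridge such as kRPC into Kerbal Space Program.

```python
# Minimal sketch (not the paper's implementation) of an LLM-driven,
# goal-oriented control loop with a configurable authority level.
# All names here are hypothetical placeholders, not APIs from the
# paper or from kRPC.

from dataclasses import dataclass, field

# Authority levels: which classes of commands the agent may issue directly.
AUTHORITY_LEVELS = {
    "advisory": set(),                              # suggest only
    "supervised": {"set_attitude", "log_event"},    # low-risk commands
    "full": {"set_attitude", "log_event", "burn"},  # all commands
}

@dataclass
class SpacecraftInterface:
    """Stand-in for a simulator bridge such as kRPC into Kerbal Space Program."""
    telemetry: dict = field(default_factory=lambda: {"alt_km": 705.0, "fuel_kg": 312.0})
    log: list = field(default_factory=list)

    def read_telemetry(self) -> dict:
        return dict(self.telemetry)

    def execute(self, action: str, args: dict) -> None:
        self.log.append((action, args))  # a real bridge would command the vessel here

def query_llm(goal: str, telemetry: dict, history: list) -> dict:
    """Placeholder for the reasoning engine (e.g. a chat-completion call).
    Returns a structured action proposal instead of free text."""
    return {"action": "log_event", "args": {"note": f"pursuing goal: {goal}"}}

def control_loop(goal: str, authority: str, craft: SpacecraftInterface, steps: int = 3):
    allowed = AUTHORITY_LEVELS[authority]
    history = []
    for _ in range(steps):
        proposal = query_llm(goal, craft.read_telemetry(), history)
        if proposal["action"] in allowed:
            craft.execute(proposal["action"], proposal["args"])
        else:
            history.append(("escalated_to_ground", proposal))  # defer to mission control
    return craft.log, history

if __name__ == "__main__":
    logs, escalations = control_loop("map the southern crater", "supervised", SpacecraftInterface())
    print(logs, escalations)
```

The design point illustrated here is the authority gate: proposals outside the agent's delegated command set are escalated rather than executed, reflecting the abstract's observation that strategic selection of the agent's level of authority helps as mission complexity grows.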
Related papers
- Adversarial Machine Learning Threats to Spacecraft [1.837431956557716]
As reliance on autonomy grows, space vehicles will become increasingly vulnerable to attacks designed to disrupt autonomous processes.
This paper aims to elucidate and demonstrate the threats that adversarial machine learning (AML) capabilities pose to spacecraft.
arXiv Detail & Related papers (2024-05-14T02:42:40Z) - Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks [50.27313829438866]
Plan-Seq-Learn (PSL) is a modular approach that uses motion planning to bridge the gap between abstract language and learned low-level control.
PSL achieves success rates of over 85%, outperforming language-based, classical, and end-to-end approaches.
arXiv Detail & Related papers (2024-05-02T17:59:31Z) - Language Models are Spacecraft Operators [36.943670587532026]
Large Language Models (LLMs) can act as autonomous agents that take actions based on the content of user text prompts.
We have developed a pure LLM-based solution for the Kerbal Space Program Differential Games (KSPDG) challenge.
arXiv Detail & Related papers (2024-03-30T16:43:59Z) - LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review the line of research on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z) - Spacecraft Autonomous Decision-Planning for Collision Avoidance: a Reinforcement Learning Approach [0.0]
This work proposes an implementation of autonomous collision avoidance (CA) decision-making capabilities on spacecraft based on Reinforcement Learning techniques.
The proposed framework considers imperfect monitoring information about the status of the debris in orbit and allows the AI system to effectively learn policies to perform accurate Collision Avoidance Maneuvers (CAMs).
The objective is to successfully delegate the decision-making process for autonomously implementing a CAM to the spacecraft without human intervention; an illustrative toy sketch of this decision problem appears after this list.
arXiv Detail & Related papers (2023-10-29T10:15:33Z) - Optimality Principles in Spacecraft Neural Guidance and Control [16.59877059263942]
We argue that end-to-end neural guidance and control architectures (here called G&CNets) allow the burden of acting on optimality principles to be transferred onboard.
In this way, sensor information is transformed in real time into optimal plans, thus increasing mission autonomy and robustness.
We discuss the main results obtained in training such neural architectures in simulation for interplanetary transfers, landings and close proximity operations.
arXiv Detail & Related papers (2023-05-22T14:48:58Z) - Assurance for Autonomy -- JPL's past research, lessons learned, and future directions [56.32768279109502]
Autonomy is required when a wide variation in circumstances precludes responses from being pre-planned.
Mission assurance is a key contributor to providing confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy.
Researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy.
arXiv Detail & Related papers (2023-05-16T18:24:12Z) - Enabling Astronaut Self-Scheduling using a Robust Advanced Modelling and Scheduling system: an assessment during a Mars analogue mission [44.621922701019336]
We study the usage of a computer decision-support tool by a crew of analog astronauts.
The proposed tool, called Romie, belongs to the new category of Robust Advanced Modelling and Scheduling (RAMS) systems.
arXiv Detail & Related papers (2023-01-14T21:10:05Z) - Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z) - Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning [54.378444600773875]
We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state-novelty and enables high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
arXiv Detail & Related papers (2021-11-18T18:36:05Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
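As a companion to the collision-avoidance entry above, here is a deliberately toy sketch of an avoidance decision under imperfect monitoring. Everything in it (the miss-distance model, noise level, rewards, and the single-step tabular update) is invented for illustration and reduces the problem to one decision epoch; the cited work addresses the full sequential reinforcement learning formulation.

```python
# Illustrative sketch only: a toy collision-avoidance decision with noisy
# (imperfect) debris observations. Dynamics, thresholds, and the tabular
# update are invented for illustration, not taken from the cited paper.

import random

ACTIONS = ["wait", "perform_cam"]  # CAM = collision avoidance maneuver

def step(true_miss_km: float, action: str) -> float:
    """One decision epoch; return the reward."""
    if action == "perform_cam":
        return -1.0                              # fuel cost, conjunction resolved
    if true_miss_km < 0.5:
        return -100.0                            # collision: large penalty
    return 0.0                                   # safe pass, no cost

def observe(true_miss_km: float) -> int:
    """Noisy tracking: discretize an imperfect miss-distance estimate."""
    est = true_miss_km + random.gauss(0.0, 0.3)
    return min(int(max(est, 0.0) / 0.5), 9)      # bucket into 10 bins

def train(episodes: int = 20000, eps: float = 0.1, lr: float = 0.1):
    q = {(s, a): 0.0 for s in range(10) for a in ACTIONS}
    for _ in range(episodes):
        true_miss = random.uniform(0.0, 5.0)
        s = observe(true_miss)
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: q[(s, x)])
        r = step(true_miss, a)
        q[(s, a)] += lr * (r - q[(s, a)])        # one-step (bandit-style) update
    return q

if __name__ == "__main__":
    q = train()
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(10)}
    print(policy)  # low miss-distance bins should learn to prefer "perform_cam"
```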
This list is automatically generated from the titles and abstracts of the papers on this site.