An integrated process for design and control of lunar robotics using AI and simulation
- URL: http://arxiv.org/abs/2509.12367v1
- Date: Mon, 15 Sep 2025 19:02:30 GMT
- Title: An integrated process for design and control of lunar robotics using AI and simulation
- Authors: Daniel Lindmark, Jonas Andersson, Kenneth Bodin, Tora Bodin, Hugo Börjesson, Fredrik Nordfeldth, Martin Servin
- Abstract summary: We envision an integrated process for developing lunar construction equipment, where physical design and control are explored in parallel. We describe a technical framework that supports this process. It relies on OpenPLX, a readable/writable declarative language that links CAD models and autonomous systems to real-time 3D simulations of contacting multibody dynamics, machine-regolith interaction forces, and non-ideal sensors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We envision an integrated process for developing lunar construction equipment, where physical design and control are explored in parallel. In this paper, we describe a technical framework that supports this process. It relies on OpenPLX, a readable/writable declarative language that links CAD models and autonomous systems to high-fidelity, real-time 3D simulations of contacting multibody dynamics, machine-regolith interaction forces, and non-ideal sensors. To demonstrate its capabilities, we present two case studies, including an autonomous lunar rover that combines a vision-language model for navigation with a reinforcement learning-based control policy for locomotion.
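The rover architecture described above, where a vision-language model proposes navigation targets and a learned policy handles low-level locomotion, can be illustrated with a minimal sketch. The function names, the stubbed VLM decision rule, and the proportional heading controller standing in for the RL policy are all hypothetical illustrations, not the paper's actual implementation:

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RoverState:
    position: Tuple[float, float]
    heading: float  # radians

def vlm_propose_waypoint(scene_description: str, goal: str) -> Tuple[float, float]:
    """Stand-in for the vision-language navigator: maps an observation and a
    natural-language goal to a waypoint. A real system would query a VLM here;
    this stub encodes a fixed obstacle-avoidance rule for illustration."""
    if "boulder" in scene_description:
        return (2.0, 1.5)   # detour around the obstacle
    return (5.0, 0.0)       # head straight toward the goal

def locomotion_policy(state: RoverState,
                      waypoint: Tuple[float, float]) -> Tuple[float, float]:
    """Stand-in for the learned locomotion policy: maps (state, waypoint) to
    left/right wheel speeds via a simple proportional heading controller."""
    dx = waypoint[0] - state.position[0]
    dy = waypoint[1] - state.position[1]
    desired = math.atan2(dy, dx)
    # wrap heading error into (-pi, pi]
    err = (desired - state.heading + math.pi) % (2 * math.pi) - math.pi
    base, gain = 1.0, 0.8
    return (base - gain * err, base + gain * err)

state = RoverState(position=(0.0, 0.0), heading=0.0)
wp = vlm_propose_waypoint("flat regolith, boulder ahead", "reach the crater rim")
left, right = locomotion_policy(state, wp)
print(wp, round(left, 3), round(right, 3))  # waypoint left of heading -> right wheel spins faster
```

In the paper's framework, both stubs would instead be wired through OpenPLX to the simulated sensors and contact dynamics, so the same control loop runs against the physics simulation rather than hand-coded rules.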
Related papers
- A Unified Experimental Architecture for Informative Path Planning: from Simulation to Deployment with GuadalPlanner [69.43049144653882]
This paper introduces a unified architecture that decouples high-level decision-making from vehicle-specific control. The proposed architecture is realized through GuadalPlanner, which defines standardized interfaces between planning, sensing, and vehicle execution.
arXiv Detail & Related papers (2026-02-11T10:02:31Z) - Causal World Modeling for Robot Control [56.31803788587547]
Video world models provide the ability to imagine the near future by understanding the causality between actions and visual dynamics. We introduce LingBot-VA, an autoregressive diffusion framework that learns frame prediction and policy execution simultaneously. We evaluate our model on both simulation benchmarks and real-world scenarios, where it shows significant promise in long-horizon manipulation, data efficiency in post-training, and strong generalizability to novel configurations.
arXiv Detail & Related papers (2026-01-29T17:07:43Z) - Seismology modeling agent: A smart assistant for geophysical researchers [14.28965530601497]
This paper proposes an intelligent, interactive workflow powered by Large Language Models (LLMs). We introduce the first Model Context Protocol (MCP) server suite for SPECFEM. The framework supports both fully automated execution and human-in-the-loop collaboration.
arXiv Detail & Related papers (2025-12-16T14:18:26Z) - GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control [50.67481583744243]
We introduce GeoDrive, which explicitly integrates robust 3D geometry conditions into driving world models. We propose a dynamic editing module during training to enhance the renderings by editing the positions of the vehicles. Our method significantly outperforms existing models in both action accuracy and 3D spatial awareness.
arXiv Detail & Related papers (2025-05-28T14:46:51Z) - FalconWing: An Open-Source Platform for Ultra-Light Fixed-Wing Aircraft Research [2.823704956886882]
FalconWing is an open-source, ultra-lightweight (150 g) fixed-wing platform for autonomy research. We develop and deploy a vision-based control policy for autonomous landing using a novel real-to-sim-to-real learning approach. When deployed zero-shot on the hardware platform, this policy achieves an 80% success rate in vision-based autonomous landings.
arXiv Detail & Related papers (2025-05-02T16:47:05Z) - Cosmos-Transfer1: Conditional World Generation with Adaptive Multimodal Control [97.98560001760126]
We introduce Cosmos-Transfer, a conditional world generation model that can generate world simulations based on multiple spatial control inputs. We conduct evaluations to analyze the proposed model and demonstrate its applications for Physical AI, including robotics Sim2Real and autonomous vehicle data enrichment.
arXiv Detail & Related papers (2025-03-18T17:57:54Z) - Autonomous Reality Modelling for Cultural Heritage Sites employing cooperative quadrupedal robots and unmanned aerial vehicles [0.0]
This paper introduces a novel methodology for autonomous 3D Reality Modeling for CH monuments by employing autonomous biomimetic quadrupedal robotic agents and UAVs equipped with the appropriate sensors.
The outcomes of this automated process may find applications in digital twin platforms, facilitating secure monitoring and management of cultural heritage sites and spaces.
arXiv Detail & Related papers (2024-02-20T08:08:07Z) - Toward a Plug-and-Play Vision-Based Grasping Module for Robotics [0.0]
This paper introduces a vision-based grasping framework that can easily be transferred across multiple manipulators.
The framework generates diverse repertoires of open-loop grasping trajectories, enhancing adaptability while maintaining a diversity of grasps.
arXiv Detail & Related papers (2023-10-06T16:16:00Z) - DeepIPC: Deeply Integrated Perception and Control for an Autonomous Vehicle in Real Environments [7.642646077340124]
We introduce DeepIPC, a novel end-to-end model tailored for autonomous driving.
DeepIPC seamlessly integrates perception and control tasks.
Our evaluation demonstrates DeepIPC's superior performance in terms of drivability and multi-task efficiency.
arXiv Detail & Related papers (2022-07-20T14:20:35Z) - Physics-Informed Bayesian Learning of Electrohydrodynamic Polymer Jet Printing Dynamics [2.9641522758725016]
GPJet is an end-to-end physics-informed Bayesian learning framework.
It can extract high-fidelity jet features in real-time from video data.
It can act as closed-loop sensory feedback to the Machine Learning module of high- and low-fidelity data.
arXiv Detail & Related papers (2022-04-16T03:29:27Z) - VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction, and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.