Few-shot Scooping Under Domain Shift via Simulated Maximal Deployment Gaps
- URL: http://arxiv.org/abs/2408.02949v1
- Date: Tue, 6 Aug 2024 04:25:09 GMT
- Title: Few-shot Scooping Under Domain Shift via Simulated Maximal Deployment Gaps
- Authors: Yifan Zhu, Pranay Thangeda, Erica L Tevere, Ashish Goel, Erik Kramer, Hari D Nayar, Melkior Ornik, Kris Hauser
- Abstract summary: This paper studies the few-shot scooping problem and proposes a vision-based adaptive scooping strategy.
We train a deep kernel model to adapt to large domain shifts by creating simulated deployment gaps from an offline training dataset.
The proposed method also demonstrates zero-shot transfer capability, successfully adapting to the NASA OWLAT platform.
- Score: 25.102403059931184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous lander missions on extraterrestrial bodies need to sample granular materials while coping with domain shifts, even when sampling strategies are extensively tuned on Earth. To tackle this challenge, this paper studies the few-shot scooping problem and proposes a vision-based adaptive scooping strategy that uses the deep kernel Gaussian process method trained with a novel meta-training strategy to learn online from very limited experience on out-of-distribution target terrains. Our Deep Kernel Calibration with Maximal Deployment Gaps (kCMD) strategy explicitly trains a deep kernel model to adapt to large domain shifts by creating simulated maximal deployment gaps from an offline training dataset and training models to overcome these deployment gaps during training. Employed in a Bayesian Optimization sequential decision-making framework, the proposed method allows the robot to perform high-quality scooping actions on out-of-distribution terrains after a few attempts, significantly outperforming non-adaptive methods proposed in the excavation literature as well as other state-of-the-art meta-learning methods. The proposed method also demonstrates zero-shot transfer capability, successfully adapting to the NASA OWLAT platform, which serves as a state-of-the-art simulator for potential future planetary missions. These results demonstrate the potential of training deep models with simulated deployment gaps for more generalizable meta-learning in high-capacity models. Furthermore, they highlight the promise of our method in autonomous lander sampling missions by enabling landers to overcome the deployment gap between Earth and extraterrestrial bodies.
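The decision loop the abstract describes, a Gaussian process with a learned deep kernel driving Bayesian Optimization over candidate scooping actions, can be sketched in a minimal form. This is an illustrative toy, not the paper's kCMD implementation: the learned deep feature extractor is replaced here by a fixed random tanh projection, and the `scoop_bo` function, the UCB acquisition rule, and the candidate action grid are all assumptions made for the sketch.

```python
import numpy as np

def feature_map(X, W, b):
    """Stand-in for the learned deep feature extractor (random tanh features here)."""
    return np.tanh(X @ W + b)

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(K_train, K_cross, k_diag, y, noise=1e-2):
    """Standard GP regression posterior mean and variance."""
    L = np.linalg.cholesky(K_train + noise * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_cross.T @ alpha
    v = np.linalg.solve(L, K_cross)
    var = k_diag - (v ** 2).sum(0)
    return mu, np.maximum(var, 1e-12)

def scoop_bo(objective, candidates, n_attempts=5, beta=2.0, seed=0):
    """Few-shot adaptation loop: scoop, observe reward, refit GP, pick next action by UCB."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(candidates.shape[1], 16))
    b = rng.normal(size=16)
    Phi = feature_map(candidates, W, b)          # embed all candidate actions once
    tried, rewards = [], []
    idx = int(rng.integers(len(candidates)))     # first scoop: random action
    for _ in range(n_attempts):
        tried.append(idx)
        rewards.append(objective(candidates[idx]))
        Ptr = Phi[tried]
        K = rbf_kernel(Ptr, Ptr)
        Kx = rbf_kernel(Ptr, Phi)
        mu, var = gp_posterior(K, Kx, np.ones(len(Phi)), np.array(rewards))
        ucb = mu + beta * np.sqrt(var)           # upper confidence bound acquisition
        ucb[tried] = -np.inf                     # do not repeat an action
        idx = int(np.argmax(ucb))
    best = int(np.argmax(rewards))
    return candidates[tried[best]], rewards[best]
```

In the paper's setting the feature extractor would be meta-trained over simulated deployment gaps, so that the kernel remains informative on out-of-distribution terrains; the loop structure above stays the same.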
Related papers
- Adaptive teachers for amortized samplers [76.88721198565861]
Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable.
Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration.
We propose an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student) by prioritizing high-loss regions.
arXiv Detail & Related papers (2024-10-02T11:33:13Z)
- Federated Multi-Agent Mapping for Planetary Exploration [0.4143603294943439]
We propose an approach to jointly train a centralized map model across agents without the need to share raw data.
Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations.
We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets.
arXiv Detail & Related papers (2024-04-02T20:32:32Z)
- Towards a Machine Learning-Based Approach to Predict Space Object Density Distributions [0.7652747219811166]
Current models for examining Anthropogenic Space Objects (ASOs) are computationally demanding.
We propose a novel machine learning-based model, as an extension of the MIT Orbital Capacity Tool (MOCAT)
We study how different deep learning-based solutions can serve as candidates for ASO propagation while managing the high dimensionality of the data.
arXiv Detail & Related papers (2024-01-08T19:43:30Z)
- Reparameterized Policy Learning for Multimodal Trajectory Optimization [61.13228961771765]
We investigate the challenge of parametrizing policies for reinforcement learning in high-dimensional continuous action spaces.
We propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories.
We present a practical model-based RL method, which leverages the multimodal policy parameterization and learned world model.
arXiv Detail & Related papers (2023-07-20T09:05:46Z)
- Robust Model-Based Optimization for Challenging Fitness Landscapes [96.63655543085258]
Protein design involves optimization on a fitness landscape.
Leading methods are challenged by sparsity of high-fitness samples in the training set.
We show that this problem of "separation" in the design space is a significant bottleneck in existing model-based optimization tools.
We propose a new approach that uses a novel VAE as its search model to overcome the problem.
arXiv Detail & Related papers (2023-05-23T03:47:32Z)
- Learning Representative Trajectories of Dynamical Systems via Domain-Adaptive Imitation [0.0]
We propose DATI, a deep reinforcement learning agent designed for domain-adaptive trajectory imitation.
Our experiments show that DATI outperforms baseline methods for imitation learning and optimal control in this setting.
Its generalization to a real-world scenario is shown through the discovery of abnormal motion patterns in maritime traffic.
arXiv Detail & Related papers (2023-04-19T15:53:48Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solved prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions [77.34726150561087]
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art algorithm Proximal Policy Optimization is adopted.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
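The localized active-learning idea in the last entry, querying a Gaussian process model only where it is most uncertain within a bounded subset of the state-action space, can be sketched in a simplified form. This toy omits the model predictive control component; the `localized_active_learning` function, the RBF lengthscale, and the candidate grid are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(A, B, lengthscale=0.2):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def posterior_variance(X_train, X_query, noise=1e-3):
    """GP posterior variance at X_query given noisy observations at X_train."""
    L = np.linalg.cholesky(rbf(X_train, X_train) + noise * np.eye(len(X_train)))
    v = np.linalg.solve(L, rbf(X_train, X_query))
    return 1.0 - (v ** 2).sum(0)

def localized_active_learning(f, candidates, in_region, n_queries=8, seed=0):
    """Greedily query f at the in-region candidate with the highest GP variance."""
    rng = np.random.default_rng(seed)
    allowed = candidates[in_region(candidates)]      # restrict to the bounded subset
    X = allowed[[int(rng.integers(len(allowed)))]]   # start from a random in-region point
    y = np.array([f(X[0])])
    for _ in range(n_queries - 1):
        var = posterior_variance(X, allowed)
        nxt = allowed[[int(np.argmax(var))]]         # most uncertain point in the region
        X = np.vstack([X, nxt])
        y = np.append(y, f(nxt[0]))
    return X, y
```

The greedy variance criterion here stands in for the paper's exploration strategy; the key shared idea is that accuracy is only pursued inside the region the controller actually visits.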
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.