Robotic Vision for Space Mining
- URL: http://arxiv.org/abs/2109.12109v2
- Date: Thu, 30 Sep 2021 01:47:49 GMT
- Title: Robotic Vision for Space Mining
- Authors: Ragav Sachdeva, Ravi Hammond, James Bockman, Alec Arthur, Brandon
Smart, Dustin Craggs, Anh-Dzung Doan, Thomas Rowntree, Elijah Schutz, Adrian
Orenstein, Andy Yu, Tat-Jun Chin, Ian Reid
- Abstract summary: We show how machine learning-enabled vision could help alleviate the challenges posed by the lunar environment.
A robust multi-robot coordinator was also developed to achieve long-term operation and effective collaboration between robots.
- Score: 32.2999577099258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Future Moon bases will likely be constructed using resources mined from the
surface of the Moon. The difficulty of maintaining a human workforce on the
Moon and the communications lag with Earth mean that mining will need to be
conducted using collaborative robots with a high degree of autonomy. In this
paper, we explore the utility of robotic vision towards addressing several
major challenges in autonomous mining in the lunar environment: lack of
satellite positioning systems, navigation in hazardous terrain, and delicate
robot interactions. Specifically, we describe and report the results of robotic
vision algorithms that we developed for Phase 2 of the NASA Space Robotics
Challenge, which was framed in the context of autonomous collaborative robots
for mining on the Moon. The competition provided a simulated lunar environment
that exhibits the complexities alluded to above. We show how machine
learning-enabled vision could help alleviate the challenges posed by the lunar
environment. A robust multi-robot coordinator was also developed to achieve
long-term operation and effective collaboration between robots.
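The abstract does not describe the coordinator's internals. As a loose illustration only, a long-running multi-robot coordinator can be organized as a per-robot finite-state machine that cycles each robot through a mining loop; every name and field below (the `Task` states, the `Robot` dataclass, the payload figures) is a hypothetical sketch, not the paper's design:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Task(Enum):
    IDLE = auto()
    EXCAVATE = auto()
    HAUL = auto()
    DUMP = auto()


@dataclass
class Robot:
    name: str
    task: Task = Task.IDLE
    payload: float = 0.0     # regolith currently carried, in kg (illustrative)
    capacity: float = 100.0  # hopper capacity, in kg (illustrative)


def coordinate(robots):
    """Advance each robot one step through a simple excavate -> haul -> dump cycle.

    A real coordinator would also handle fault recovery and inter-robot
    hand-offs; this sketch only shows the state-machine skeleton.
    """
    for r in robots:
        if r.task == Task.IDLE:
            r.task = Task.EXCAVATE
        elif r.task == Task.EXCAVATE:
            r.payload = r.capacity  # assume one scoop fills the hopper
            r.task = Task.HAUL
        elif r.task == Task.HAUL:
            r.task = Task.DUMP
        elif r.task == Task.DUMP:
            r.payload = 0.0
            r.task = Task.IDLE
    return robots
```

Calling `coordinate(fleet)` once per control tick advances every robot one state, which is the usual way such loops achieve long-term operation without per-robot scripting.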
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- We Choose to Go to Space: Agent-driven Human and Multi-Robot Collaboration in Microgravity [28.64243893838686]
Future space exploration requires humans and robots to work together.
We present SpaceAgents-1, a system for learning human and multi-robot collaboration strategies under microgravity conditions.
arXiv Detail & Related papers (2024-02-22T05:32:27Z)
- Enabling Astronaut Self-Scheduling using a Robust Advanced Modelling and Scheduling system: an assessment during a Mars analogue mission [44.621922701019336]
We study the usage of a computer decision-support tool by a crew of analog astronauts.
The proposed tool, called Romie, belongs to the new category of Robust Advanced Modelling and Scheduling (RAMS) systems.
arXiv Detail & Related papers (2023-01-14T21:10:05Z)
- Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision [72.4735163268491]
Commercial and industrial deployments of robot fleets often fall back on remote human teleoperators during execution.
We formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors.
We propose Fleet-DAgger, a family of IFL algorithms, and compare a novel Fleet-DAgger algorithm to 4 baselines in simulation.
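The blurb above leaves the IFL setting abstract. One way to picture it (an illustrative sketch, not the Fleet-DAgger algorithm itself; the priority scores and function name below are assumptions) is as allocating a limited pool of human supervisors to the robots with the highest priority scores each round:

```python
def allocate_humans(priorities, num_humans):
    """Assign at most num_humans supervisors to the neediest robots.

    priorities: dict mapping robot id -> priority score (higher = needier,
        e.g. model uncertainty or time since last task progress).
    Returns the set of robot ids that receive human supervision this round.
    """
    ranked = sorted(priorities, key=priorities.get, reverse=True)
    return set(ranked[:num_humans])
```

For example, `allocate_humans({"r1": 0.9, "r2": 0.1, "r3": 0.5}, 2)` routes the two supervisors to `r1` and `r3`, leaving `r2` to act autonomously.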
arXiv Detail & Related papers (2022-06-29T01:23:57Z)
- Design and Simulation of an Autonomous Quantum Flying Robot Vehicle: An IBM Quantum Experience [0.0]
Quantum phenomena allow the robots to occupy less space, and quantum computation enables them to process large amounts of information effectively.
We propose a quantum robot vehicle that is 'smart' enough to understand more complex situations than a simple Braitenberg vehicle can.
arXiv Detail & Related papers (2022-06-01T00:08:41Z)
- Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation [92.66286342108934]
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations.
arXiv Detail & Related papers (2022-03-28T19:09:11Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
We propose a fully autonomous aerial robot for high-speed object grasping.
As an additional sub-task, our system is able to autonomously pierce balloons mounted on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z)
- Co-Evolution of Multi-Robot Controllers and Task Cues for Off-World Open Pit Mining [0.6091702876917281]
This paper presents a novel method for developing scalable controllers for use in multi-robot excavation and site-preparation scenarios.
The controller starts with a blank slate and does not require human-authored operations scripts nor detailed modeling of the kinematics and dynamics of the excavator.
In this paper, we explore the use of templates and task cues to improve group performance further and minimize antagonism.
arXiv Detail & Related papers (2020-09-19T03:13:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.