Advances in Deep Space Exploration via Simulators & Deep Learning
- URL: http://arxiv.org/abs/2002.04051v2
- Date: Sat, 6 Jun 2020 19:21:36 GMT
- Title: Advances in Deep Space Exploration via Simulators & Deep Learning
- Authors: James Bird, Linda Petzold, Philip Lubin, Julia Deacon
- Abstract summary: The StarLight program conceptualizes fast interstellar travel via small wafer satellites (wafersats).
The main goal of these wafer satellites is to gather useful images during their deep space journey.
Equipment fails and data rates are slow, so we need a method to ensure that the images most important to humankind are the ones prioritized for data transfer.
We introduce simulator-based methods that leverage artificial intelligence, mostly in the form of computer vision, to solve these issues.
- Score: 2.294014185517203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The StarLight program conceptualizes fast interstellar travel via small wafer
satellites (wafersats) that are propelled by directed energy. This process is
wildly different from traditional space travel and trades large and slow
spacecraft for small, fast, inexpensive, and fragile ones. The main goal of
these wafer satellites is to gather useful images during their deep space
journey. We introduce and solve some of the main problems that accompany this
concept. First, we need an object detection system that can detect planets that
we have never seen before, some containing features that we may not even know
exist in the universe. Second, once we have images of exoplanets, we need a way
to take these images and rank them by importance. Equipment fails and data
rates are slow, so we need a method to ensure that the images most important
to humankind are the ones prioritized for data transfer. Finally, the
energy on board is minimal and must be conserved and used sparingly. No
exoplanet images should be missed, but using energy erroneously would be
detrimental. We introduce simulator-based methods that leverage artificial
intelligence, mostly in the form of computer vision, in order to solve all
three of these issues. Our results confirm that simulators provide an extremely
rich training environment that surpasses that of real images, and can be used
to train models on features that have yet to be observed by humans. We also
show that the immersive and adaptable environment provided by the simulator,
combined with deep learning, lets us navigate and save energy in an otherwise
implausible way.
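The image-prioritization problem described above can be pictured as a simple priority queue: each captured image receives an importance score, and under a limited transmission budget the highest-scoring images are sent first. The following is an illustrative sketch only, not the paper's implementation; the scoring values and budget model are assumptions.

```python
import heapq

def prioritize_downlink(images, scores, budget):
    """Select the highest-importance images to transmit first
    under a limited downlink budget.

    images: list of image identifiers
    scores: importance score per image (higher = more important)
    budget: how many images the current data rate allows
    """
    # heapq is a min-heap, so negate scores to pop the
    # most important image first.
    heap = [(-s, img) for s, img in zip(scores, images)]
    heapq.heapify(heap)
    sent = []
    while heap and len(sent) < budget:
        _, img = heapq.heappop(heap)
        sent.append(img)
    return sent

# With bandwidth for only two images, the two highest-scored
# images are selected.
print(prioritize_downlink(["a", "b", "c"], [0.2, 0.9, 0.5], 2))  # ['b', 'c']
```

In a real mission the score would come from a learned model (e.g. a classifier's confidence that the frame contains an exoplanet), but the queueing logic is independent of how scores are produced.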
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views by transforming pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- Legged Locomotion in Challenging Terrains using Egocentric Vision [70.37554680771322]
We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
arXiv Detail & Related papers (2022-11-14T18:59:58Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose a pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- NavDreams: Towards Camera-Only RL Navigation Among Humans [35.57943738219839]
We investigate whether the world model concept, which has shown results for modeling and learning policies in Atari games, can also be applied to the camera-based navigation problem.
We create simulated environments where a robot must navigate past static and moving humans without colliding in order to reach its goal.
We find that state-of-the-art methods can successfully solve the navigation problem and can generate dream-like predictions of future image sequences.
arXiv Detail & Related papers (2022-03-23T09:46:44Z)
- SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across Domain Gap [0.9449650062296824]
This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
arXiv Detail & Related papers (2021-10-06T23:22:24Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
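The scale-drift risk idea above can be sketched as an eigen-analysis of the 3x3 relative translation information matrix: if its smallest eigenvalue is near zero relative to the largest, one translation direction is nearly unobserved and scale is poorly constrained. This is a hedged illustration of the general principle, not the paper's algorithm; the risk formula below is an assumption.

```python
import numpy as np

def scale_drift_risk(info_matrix, eps=1e-9):
    """Estimate how poorly constrained relative translation is.

    info_matrix: 3x3 symmetric information (inverse-covariance)
    matrix of the relative translation. A near-zero smallest
    eigenvalue means one translation direction is almost
    unobserved, a condition that tends to precede scale drift.
    Returns a risk in [0, 1]: 0 = well constrained, 1 = degenerate.
    """
    eigvals = np.linalg.eigvalsh(info_matrix)  # ascending order
    # Ratio of the weakest to the strongest constrained direction.
    conditioning = eigvals[0] / max(eigvals[-1], eps)
    return 1.0 - conditioning

# Isotropic, well-conditioned information matrix -> no risk.
print(round(scale_drift_risk(np.diag([4.0, 4.0, 4.0])), 3))  # 0.0
# Nearly singular along one axis -> high risk.
print(round(scale_drift_risk(np.diag([4.0, 4.0, 1e-6])), 3))  # 1.0
```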
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Image simulation for space applications with the SurRender software [0.0]
We explain why traditional rendering engines may present limitations that are potentially critical for space applications.
We introduce Airbus SurRender software v7 and provide details on features that make it a very powerful space image simulator.
arXiv Detail & Related papers (2021-06-21T18:00:01Z)
- Model Optimization for Deep Space Exploration via Simulators and Deep Learning [0.0]
We explore the application of deep learning using neural networks to automate the detection of astronomical bodies.
The ability to acquire images, analyze them, and send back those that are important, is critical in bandwidth-limited applications.
We show that maximum achieved accuracy can hit above 98% for multiple model architectures, even with a relatively small training set.
arXiv Detail & Related papers (2020-12-28T04:36:09Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often fail to work directly in the real world, a gap known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
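A point-cloud observation space with environment randomization might look like the following sketch: subsample the sensed cloud to a fixed size and add random jitter so the policy never overfits to exact simulated geometry. This is an assumed, minimal interpretation for illustration; the paper's actual observation construction and randomization scheme will differ.

```python
import numpy as np

def make_observation(point_cloud, n_points=256, noise_std=0.01, rng=None):
    """Build a fixed-size, randomized point-cloud observation.

    point_cloud: (N, 3) array of sensed 3D points.
    Randomly subsamples to n_points and adds Gaussian jitter,
    a simple stand-in for environment randomization that aids
    sim-to-real transfer.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Sample with replacement only when the cloud is too small.
    idx = rng.choice(len(point_cloud), size=n_points,
                     replace=len(point_cloud) < n_points)
    obs = point_cloud[idx] + rng.normal(0.0, noise_std, (n_points, 3))
    return obs.astype(np.float32)

cloud = np.random.rand(1000, 3)
obs = make_observation(cloud)
print(obs.shape)  # (256, 3)
```

A fixed observation size matters because most policy networks expect constant-shape inputs regardless of how many points the sensor returns.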
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.