Model Optimization for Deep Space Exploration via Simulators and Deep
Learning
- URL: http://arxiv.org/abs/2012.14092v1
- Date: Mon, 28 Dec 2020 04:36:09 GMT
- Title: Model Optimization for Deep Space Exploration via Simulators and Deep
Learning
- Authors: James Bird, Kellan Colburn, Linda Petzold, Philip Lubin
- Abstract summary: We explore the application of deep learning using neural networks to automate the detection of astronomical bodies.
The ability to acquire images, analyze them, and send back those that are important, is critical in bandwidth-limited applications.
We show that maximum achieved accuracy can exceed 98% for multiple model architectures, even with a relatively small training set.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning, and eventually true artificial intelligence techniques, are
extremely important advancements in astrophysics and astronomy. We explore the
application of deep learning using neural networks in order to automate the
detection of astronomical bodies for future exploration missions, such as
missions to search for signatures or suitability of life. The ability to
acquire images, analyze them, and send back those that are important, as
determined by the deep learning algorithm, is critical in bandwidth-limited
applications. Our previous foundational work solidified the concept of using
simulator images and deep learning in order to detect planets. Optimization of
this process is of vital importance, as even a small loss in accuracy might be
the difference between capturing and completely missing a possibly habitable
nearby planet. Through computer vision, deep learning, and simulators, we
introduce methods that optimize the detection of exoplanets. We show that
maximum achieved accuracy can exceed 98% for multiple model architectures,
even with a relatively small training set.
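The pipeline the abstract describes (acquire images on board, score each one with a learned model, and downlink only the highest-scoring frames under a bandwidth budget) can be sketched as follows. This is a toy illustration, not the authors' method: the matched-filter `interest_score` merely stands in for a trained CNN's confidence, and the function names, template, and image shapes are all hypothetical.

```python
import numpy as np

def interest_score(img, template):
    """Peak response of a naive valid cross-correlation of `template`
    over `img`. A stand-in for a trained model's detection confidence."""
    th, tw = template.shape
    h, w = img.shape
    best = float("-inf")
    for i in range(h - th + 1):
        for j in range(w - tw + 1):
            best = max(best, float(np.sum(img[i:i + th, j:j + tw] * template)))
    return best

def select_for_downlink(images, template, budget):
    """Rank candidate frames by score and keep the `budget` highest-scoring
    ones -- the frames worth spending the limited bandwidth on."""
    ranked = sorted(range(len(images)),
                    key=lambda k: interest_score(images[k], template),
                    reverse=True)
    return sorted(ranked[:budget])
```

With a budget of one frame, a frame containing a bright disc outranks blank frames and is the one selected for transfer; in the paper's setting the scoring model would instead be a network trained on simulator images.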
Related papers
- Reward Finetuning for Faster and More Accurate Unsupervised Object
Discovery [64.41455104593304]
Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences.
We propose to adapt similar RL-based methods to unsupervised object discovery.
We demonstrate that our approach is not only more accurate, but also orders of magnitude faster to train.
arXiv Detail & Related papers (2023-10-29T17:03:12Z)
- Comparing Active Learning Performance Driven by Gaussian Processes or
Bayesian Neural Networks for Constrained Trajectory Exploration [0.0]
Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands may cause undue delays in mission fulfillment.
An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly.
Active learning algorithms offer this capability of intelligent exploration, but the underlying model structure affects how accurately the active learning algorithm forms an understanding of the environment.
arXiv Detail & Related papers (2023-09-28T02:45:14Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- DiffSkill: Skill Abstraction from Differentiable Physics for Deformable
Object Manipulations with Tools [96.38972082580294]
DiffSkill is a novel framework that uses a differentiable physics simulator for skill abstraction to solve deformable object manipulation tasks.
In particular, we first obtain short-horizon skills using individual tools from a gradient-based simulator.
We then learn a neural skill abstractor from the demonstration trajectories which takes RGBD images as input.
arXiv Detail & Related papers (2022-03-31T17:59:38Z)
- A Spacecraft Dataset for Detection, Segmentation and Parts Recognition [42.27081423489484]
In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
arXiv Detail & Related papers (2021-06-15T14:36:56Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of
Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Detection of asteroid trails in Hubble Space Telescope images using Deep
Learning [0.0]
We present an application of Deep Learning for the image recognition of asteroid trails in single-exposure photos taken by the Hubble Space Telescope.
Using algorithms based on multi-layered deep Convolutional Neural Networks, we report accuracies of above 80% on the validation set.
arXiv Detail & Related papers (2020-10-29T09:03:18Z)
- Low Dimensional State Representation Learning with Reward-shaped Priors [7.211095654886105]
We propose a method that aims at learning a mapping from the observations into a lower-dimensional state space.
This mapping is learned with unsupervised learning using loss functions shaped to incorporate prior knowledge of the environment and the task.
We test the method on several mobile robot navigation tasks in a simulation environment and also on a real robot.
arXiv Detail & Related papers (2020-07-29T13:00:39Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-02T10:44:13Z)
- Advances in Deep Space Exploration via Simulators & Deep Learning [2.294014185517203]
The StarLight program conceptualizes fast interstellar travel via small wafer satellites (wafersats).
The main goal of these wafer satellites is to gather useful images during their deep space journey.
Equipment fails and data rates are slow, so we need a method to ensure that the images most important to humankind are the ones prioritized for data transfer.
We introduce simulator-based methods that leverage artificial intelligence, mostly in the form of computer vision, in order to solve all three of these issues.
arXiv Detail & Related papers (2020-02-10T19:07:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.