Robotic Testbed for Rendezvous and Optical Navigation: Multi-Source
Calibration and Machine Learning Use Cases
- URL: http://arxiv.org/abs/2108.05529v1
- Date: Thu, 12 Aug 2021 04:38:50 GMT
- Title: Robotic Testbed for Rendezvous and Optical Navigation: Multi-Source
Calibration and Machine Learning Use Cases
- Authors: Tae Ha Park, Juergen Bosse, Simone D'Amico
- Abstract summary: This work presents the most recent advances of the Robotic Testbed for Rendezvous and Optical Navigation (TRON) at Stanford University.
The TRON facility consists of two 6-degrees-of-freedom KUKA robot arms and a set of Vicon motion capture cameras that reconfigure an arbitrary relative pose between a camera and a target mockup model.
A comparative analysis of synthetic and TRON-simulated imagery is performed using a Convolutional Neural Network (CNN) pre-trained on the synthetic images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work presents the most recent advances of the Robotic Testbed for
Rendezvous and Optical Navigation (TRON) at Stanford University - the first
robotic testbed capable of validating machine learning algorithms for
spaceborne optical navigation. The TRON facility consists of two 6
degrees-of-freedom KUKA robot arms and a set of Vicon motion capture cameras to
reconfigure an arbitrary relative pose between a camera and a target mockup
model. The facility includes multiple Earth albedo light boxes and a sun lamp
to recreate high-fidelity spaceborne illumination conditions. After the
overview of the facility, this work details the multi-source calibration
procedure which enables the estimation of the relative pose between the object
and the camera with millimeter-level position and millidegree-level orientation
accuracies. Finally, a comparative analysis of synthetic and TRON-simulated
imagery is performed using a Convolutional Neural Network (CNN) pre-trained
on the synthetic images. The result shows a considerable gap in the CNN's
performance, suggesting that TRON-simulated images can be used to validate the
robustness of any machine learning algorithms trained on more easily accessible
synthetic imagery from computer graphics.
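The multi-source calibration described in the abstract ultimately amounts to chaining rigid-body transforms measured by different sources (robot kinematics, Vicon, camera calibration) into a single camera-to-target pose. The sketch below shows only that final composition step with 4x4 homogeneous transforms; the frame names and numeric values are hypothetical stand-ins, not the authors' calibration procedure:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical measured transforms (identity rotations for illustration):
# world -> camera, as estimated from the robot/Vicon calibration chain
T_world_cam = make_T(np.eye(3), np.array([0.0, 0.0, 1.0]))
# world -> target mockup
T_world_tgt = make_T(np.eye(3), np.array([0.5, 0.0, 1.0]))

# Relative pose of the target expressed in the camera frame:
# T_cam_tgt = inv(T_world_cam) @ T_world_tgt
T_cam_tgt = np.linalg.inv(T_world_cam) @ T_world_tgt
print(T_cam_tgt[:3, 3])  # target position in the camera frame
```

Any error in the individual calibrated transforms propagates through this product, which is why the paper's millimeter/millidegree accuracy claim hinges on calibrating each source carefully.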
Related papers
- RETINA: a hardware-in-the-loop optical facility with reduced optical aberrations [0.0]
Vision-based navigation algorithms have established themselves as effective solutions to determine the spacecraft state in orbit with low-cost and versatile sensors.
A dedicated simulation framework must be developed to emulate the orbital environment in a laboratory setup.
This paper presents the design of a low-aberration optical facility called RETINA to perform this task.
arXiv Detail & Related papers (2024-07-02T11:26:37Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose a pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network [1.4095425725284465]
We present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets.
It includes a new frame-based image sensor model with realistic image quality reduction effects, and an extended DVS model with more accurate characteristics.
We show that data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of equivalent or better quality than the state of the art.
arXiv Detail & Related papers (2021-12-17T08:27:13Z)
- CNN-based Omnidirectional Object Detection for HermesBot Autonomous Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
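The preliminary binary frame classification above is essentially a gating scheme: a cheap per-frame classifier decides whether the expensive detector runs at all, which matters when six camera streams must be processed in real time. A minimal sketch of that idea; the classifier and detector here are toy stand-ins, not the paper's networks:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str
    score: float

def detect_over_ring(frames: List[object],
                     has_object: Callable[[object], bool],
                     detect: Callable[[object], List[Detection]]) -> List[List[Detection]]:
    """Run a cheap binary classifier on every frame from the camera ring,
    invoking the expensive detector only on frames flagged as non-empty."""
    results = []
    for frame in frames:
        results.append(detect(frame) if has_object(frame) else [])
    return results

# Toy usage with stand-in classifier/detector:
frames = ["empty", "box", "empty", "person"]
out = detect_over_ring(
    frames,
    has_object=lambda f: f != "empty",
    detect=lambda f: [Detection(f, 0.9)],
)
print([len(r) for r in out])  # the detector ran on only 2 of 4 frames
```

The saving is proportional to the fraction of empty frames, so the scheme pays off most when most of the 360-degree view contains nothing of interest.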
arXiv Detail & Related papers (2021-10-22T15:05:37Z)
- Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space for easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z)
- Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
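The epipolar constraints mentioned above build on the classical two-view relation: corresponding homogeneous image points x1, x2 satisfy x2^T F x1 = 0 for the fundamental matrix F. A minimal sketch of checking that residual; the particular F and points are illustrative values, not taken from the paper:

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Epipolar constraint residual x2^T F x1 for homogeneous image points.
    Zero (up to noise) when the correspondence is consistent with F."""
    return float(x2 @ F @ x1)

# Toy example: for a pure horizontal translation between views, F is
# proportional to the skew matrix [e]_x of the epipole direction e = (1, 0, 0).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
x1 = np.array([0.2, 0.3, 1.0])
x2 = np.array([0.5, 0.3, 1.0])  # same image row: consistent with horizontal motion
print(epipolar_residual(F, x1, x2))  # ~0 for a valid correspondence
```

In practice such residuals are accumulated over many correspondences and minimized (or thresholded for outlier rejection) rather than expected to be exactly zero.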
arXiv Detail & Related papers (2020-11-30T20:46:48Z)
- Multimodal Material Classification for Robots using Spectroscopy and High Resolution Texture Imaging [14.458436940557924]
We present a multimodal sensing technique, leveraging near-infrared spectroscopy and close-range high resolution texture imaging.
We show that this representation enables a robot to recognize materials with better performance than prior state-of-the-art approaches.
arXiv Detail & Related papers (2020-04-02T17:33:54Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
- Real-Time Object Detection and Recognition on Low-Compute Humanoid Robots using Deep Learning [0.12599533416395764]
We describe a novel architecture that enables multiple low-compute NAO robots to perform real-time detection, recognition, and localization of objects in their camera views.
The proposed algorithm for object detection and localization is an empirical modification of YOLOv3, based on indoor experiments in multiple scenarios.
The architecture also comprises an effective end-to-end pipeline to feed real-time frames from the camera to the neural network and use its results to guide the robot.
arXiv Detail & Related papers (2020-01-20T05:24:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.