Lessons from a Space Lab -- An Image Acquisition Perspective
- URL: http://arxiv.org/abs/2208.08865v1
- Date: Thu, 18 Aug 2022 14:44:40 GMT
- Title: Lessons from a Space Lab -- An Image Acquisition Perspective
- Authors: Leo Pauly, Michele Lynn Jamrozik, Miguel Ortiz Del Castillo, Olivia
Borgue, Inder Pal Singh, Mohatashem Reyaz Makhdoomi, Olga-Orsalia
Christidi-Loumpasefski, Vincent Gaudilliere, Carol Martinez, Arunkumar
Rathinam, Andreas Hein, Miguel Olivares Mendez, Djamila Aouada
- Abstract summary: The Interdisciplinary Center of Security, Reliability and Trust (SnT) at the University of Luxembourg has developed the 'SnT Zero-G Lab'.
This article presents a systematic approach combining market survey and experimental analyses for equipment selection.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of Deep Learning (DL) algorithms has improved the performance of
vision-based space applications in recent years. However, generating large
amounts of annotated data for training these DL algorithms has proven
challenging. While synthetically generated images can be used, DL models
trained on synthetic data are often susceptible to performance degradation
when tested in real-world environments. In this context, the Interdisciplinary
Center of Security, Reliability and Trust (SnT) at the University of Luxembourg
has developed the 'SnT Zero-G Lab', for training and validating vision-based
space algorithms in conditions emulating real-world space environments. An
important aspect of the SnT Zero-G Lab development was the equipment selection.
From the lessons learned during the lab development, this article presents a
systematic approach combining market survey and experimental analyses for
equipment selection. In particular, the article focuses on the image acquisition
equipment in a space lab: background materials, cameras, and illumination lamps.
The results of the experimental analyses show that a market survey
complemented by experimental analyses is required for effective equipment
selection in a space lab development project.
Related papers
- LCDC: Bridging Science and Machine Learning for Light Curve Analysis
A Python-based toolkit enables preprocessing, analysis, and machine learning applications of light curve data.
RoBo6, the first standardized dataset for rocket body classification, is used to train and evaluate several benchmark machine learning models.
These use cases highlight LCDC's potential to advance space debris characterization and promote sustainable space exploration.
arXiv Detail & Related papers (2025-04-14T07:50:55Z)
- Extrapolated Urban View Synthesis Benchmark
Photorealistic simulators are essential for the training and evaluation of vision-centric autonomous vehicles (AVs).
At their core is Novel View Synthesis (NVS), a capability that generates diverse unseen viewpoints to accommodate the broad and continuous pose distribution of AVs.
Recent advances in radiance fields, such as 3D Gaussian Splatting, achieve photorealistic rendering at real-time speeds and have been widely used in modeling large-scale driving scenes.
We will release the data to help advance self-driving and urban robotics simulation technology.
arXiv Detail & Related papers (2024-12-06T18:41:39Z)
- Exploring Fully Convolutional Networks for the Segmentation of Hyperspectral Imaging Applied to Advanced Driver Assistance Systems
We explore the use of hyperspectral imaging (HSI) in Advanced Driver Assistance Systems (ADAS).
This paper describes some experimental results of the application of fully convolutional networks (FCN) to the image segmentation of HSI for ADAS applications.
We use the HSI-Drive v1.1 dataset, which provides a set of labelled images recorded in real driving conditions with a small-size snapshot NIR-HSI camera.
arXiv Detail & Related papers (2024-12-05T08:58:25Z)
- A Deep Learning Approach for Pixel-level Material Classification via Hyperspectral Imaging
Hyperspectral (HS) imaging offers advantages over conventional technologies such as X-ray fluorescence and Raman spectroscopy.
This study evaluates the potential of combining HS imaging with deep learning for material characterization.
The model achieved 99.94% classification accuracy, demonstrating robustness in color, size, and shape invariance, and effectively handling material overlap.
arXiv Detail & Related papers (2024-09-20T13:38:48Z)
- Training Datasets Generation for Machine Learning: Application to Vision Based Navigation
Vision Based Navigation consists in utilizing cameras as precision sensors for GNC by extracting information from images.
To enable the adoption of machine learning for space applications, one of the obstacles is demonstrating that available training datasets are adequate to validate the algorithms.
The objective of the study is to generate datasets of images and metadata suitable for training machine learning algorithms.
arXiv Detail & Related papers (2024-09-17T17:34:24Z)
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a huge human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- Space Debris: Are Deep Learning-based Image Enhancements part of the Solution?
The volume of space debris currently orbiting the Earth is reaching an unsustainable level at an accelerated pace.
The detection, tracking, identification, and differentiation between orbit-defined, registered spacecraft and rogue/inactive space objects is critical to asset protection.
The primary objective of this work is to investigate the validity of Deep Neural Network (DNN) solutions to overcome the limitations and image artefacts most prevalent when captured with monocular cameras in the visible light spectrum.
arXiv Detail & Related papers (2023-08-01T09:38:41Z)
- ChemVise: Maximizing Out-of-Distribution Chemical Detection with the Novel Application of Zero-Shot Learning
This research proposes learning approximations of complex exposures from training sets of simple ones.
We demonstrate that this approach to synthetic sensor responses surprisingly improves the detection of out-of-distribution obscured chemical analytes.
arXiv Detail & Related papers (2023-02-09T20:19:57Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Optical flow-based branch segmentation for complex orchard environments
We train a neural network system in simulation only, using simulated RGB data and optical flow.
This resulting neural network is able to perform foreground segmentation of branches in a busy orchard environment without additional real-world training or using any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
arXiv Detail & Related papers (2022-02-26T03:38:20Z)
- SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across Domain Gap
This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
arXiv Detail & Related papers (2021-10-06T23:22:24Z)
- Salient Objects in Clutter
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- A Pipeline for Vision-Based On-Orbit Proximity Operations Using Deep Learning and Synthetic Imagery
Two key challenges currently pose a major barrier to the use of deep learning for vision-based on-orbit proximity operations.
A scarcity of labeled training data (images of a target spacecraft) hinders creation of robust deep learning models.
This paper presents an open-source deep learning pipeline, developed specifically for on-orbit visual navigation applications.
arXiv Detail & Related papers (2021-01-14T15:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.