Augmented Reality in Service of Human Operations on the Moon: Insights
from a Virtual Testbed
- URL: http://arxiv.org/abs/2303.10686v1
- Date: Sun, 19 Mar 2023 15:32:14 GMT
- Title: Augmented Reality in Service of Human Operations on the Moon: Insights
from a Virtual Testbed
- Authors: Leonie Becker, Tommy Nilsson, Paul Topf Aguiar de Medeiros, Flavie
Rometsch
- Abstract summary: We present findings based on qualitative reflections made by the first 6 study participants.
AR was found instrumental in several use cases, including the support of navigation and risk awareness.
Major design challenges were likewise identified, including the importance of redundancy and contextual appropriateness.
- Score: 1.8638865257327277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Future astronauts living and working on the Moon will face extreme
environmental conditions impeding their operational safety and performance.
While it has been suggested that Augmented Reality (AR) Head-Up Displays (HUDs)
could potentially help mitigate some of these adversities, the applicability of
AR in the unique lunar context remains underexplored. To address this
limitation, we have produced an accurate representation of the lunar setting in
virtual reality (VR) which then formed our testbed for the exploration of
prospective operational scenarios with aerospace experts. Herein we present
findings based on qualitative reflections made by the first 6 study
participants. AR was found instrumental in several use cases, including the
support of navigation and risk awareness. Major design challenges were likewise
identified, including the importance of redundancy and contextual
appropriateness. Drawing on these findings, we conclude by outlining directions
for future research aimed at developing AR-based assistive solutions tailored
to the lunar setting.
Related papers
- ForesightNav: Learning Scene Imagination for Efficient Exploration [57.49417653636244]
We propose ForesightNav, a novel exploration strategy inspired by human imagination and reasoning.
Our approach equips robotic agents with the capability to predict contextual information, such as occupancy and semantic details, for unexplored regions.
We validate our imagination-based approach using the Structured3D dataset, demonstrating accurate occupancy prediction and superior performance in anticipating unseen scene geometry.
arXiv Detail & Related papers (2025-04-22T17:38:38Z)
- Beyond the Destination: A Novel Benchmark for Exploration-Aware Embodied Question Answering [87.76784654371312]
Embodied Question Answering requires agents to dynamically explore 3D environments, actively gather visual information, and perform multi-step reasoning to answer questions.
Existing datasets often introduce biases or prior knowledge, leading to disembodied reasoning.
We construct the largest dataset designed specifically to evaluate both exploration and reasoning capabilities.
arXiv Detail & Related papers (2025-03-14T06:29:47Z)
- Psych-Occlusion: Using Visual Psychophysics for Aerial Detection of Occluded Persons during Search and Rescue [41.03292974500013]
Small Unmanned Aerial Systems (sUAS) serve as "eyes in the sky" during Emergency Response (ER) scenarios.
Efficient detection of persons from aerial views plays a crucial role in achieving a successful mission outcome.
Performance of Computer Vision (CV) models onboard sUAS substantially degrades under real-life rigorous conditions.
We exemplify the use of our behavioral dataset, Psych-ER, by using its human accuracy data to adapt the loss function of a detection model.
arXiv Detail & Related papers (2024-12-07T06:22:42Z)
- Foundation Models for Remote Sensing and Earth Observation: A Survey [101.77425018347557]
This survey systematically reviews the emerging field of Remote Sensing Foundation Models (RSFMs).
It begins with an outline of their motivation and background, followed by an introduction of their foundational concepts.
We benchmark these models against publicly available datasets, discuss existing challenges, and propose future research directions.
arXiv Detail & Related papers (2024-10-22T01:08:21Z)
- Computer vision tasks for intelligent aerospace missions: An overview [10.929595257238548]
Computer vision tasks are crucial for aerospace missions as they help spacecraft to understand and interpret the space environment.
Traditional methods like Kalman Filtering, Structure from Motion, and Multi-View Stereo are not robust enough to handle harsh conditions.
Deep learning (DL)-based perception technologies have shown great potential and outperformed traditional methods.
arXiv Detail & Related papers (2024-07-09T02:50:54Z)
- Using Virtual Reality to Shape Humanity's Return to the Moon: Key Takeaways from a Design Study [1.320520802560207]
This paper explores the possible use of Virtual Reality (VR) to simulate analogue studies in lab settings.
We have recreated a prospective lunar operational scenario in VR with a group of astronauts and space experts.
arXiv Detail & Related papers (2023-03-01T17:19:48Z)
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new SpaceYOLO algorithm fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on human-inspired decision processes.
Performance in autonomous spacecraft detection of SpaceYOLO is compared to ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z)
- Exploring Event Camera-based Odometry for Planetary Robots [39.46226359115717]
Event cameras are poised to become enabling sensors for vision-based exploration on future Mars helicopter missions.
Existing event-based visual-inertial odometry (VIO) algorithms either suffer from high tracking errors or are brittle.
We introduce EKLT-VIO, which addresses both limitations by combining a state-of-the-art event-based frontend with a filter-based backend.
arXiv Detail & Related papers (2022-04-12T15:19:50Z)
- Going Deeper into Recognizing Actions in Dark Environments: A Comprehensive Benchmark Study [35.53075596912581]
We focus on the task of action recognition in dark environments, which can be applied to fields such as surveillance and autonomous driving at night.
We launch the UG2+ Challenge Track 2 (UG2-2) at IEEE CVPR 2021, with the goal of evaluating and advancing the robustness of action recognition models in dark environments.
arXiv Detail & Related papers (2022-02-19T07:51:59Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
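The PCA-based risk estimate described above can be sketched under our own simplifying assumptions (a 3x3 relative-translation information matrix, with the inverse of its smallest eigenvalue taken as the risk score; the function name and scoring rule are hypothetical illustrations, not the paper's implementation):

```python
import numpy as np

def scale_drift_risk(info_matrix):
    """Estimate a scale-drift risk score from a 3x3 relative-translation
    information matrix via principal component analysis.

    Eigendecomposition of the symmetric information matrix reveals how
    strongly each spatial direction is constrained; a small minimum
    eigenvalue means translation is weakly observable along that axis,
    so we use the inverse of that eigenvalue as a simple risk proxy.
    """
    eigvals = np.linalg.eigvalsh(info_matrix)  # sorted ascending
    weakest = eigvals[0]
    return 1.0 / max(weakest, 1e-12)  # guard against division by zero

# A well-conditioned matrix (all directions constrained) yields low risk,
# while a near-degenerate one yields high risk.
well_constrained = np.diag([100.0, 90.0, 80.0])
degenerate = np.diag([100.0, 90.0, 0.01])
print(scale_drift_risk(well_constrained))  # low risk (~0.0125)
print(scale_drift_risk(degenerate))        # high risk (~100.0)
```

In practice the information matrix would come from the odometry backend's covariance estimates; here it is fabricated for illustration.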
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state-space guided by a modest number of human provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- A survey on applications of augmented, mixed and virtual reality for nature and environment [114.4879749449579]
Augmented reality (AR), virtual reality (VR) and mixed reality (MR) are technologies of great potential due to the engaging and enriching experiences they are capable of providing.
However, the possibilities that AR, VR and MR offer in the area of environmental applications are not yet widely explored.
We present the outcome of a survey meant to discover and classify existing AR/VR/MR applications that can benefit the environment or increase awareness on environmental issues.
arXiv Detail & Related papers (2020-08-27T09:59:27Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
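The paper's learned anticipation model is not reproduced here; as a naive illustrative stand-in (our own construction, not the authors' method), unknown cells in a toy occupancy grid can be "anticipated" by copying the label of the nearest observed cell:

```python
import numpy as np

# Toy occupancy grid: 0 = unknown, 1 = free, 2 = occupied.
# The actual paper trains a network to predict occupancy beyond the
# visible region; this nearest-observed-cell fill is only a crude
# spatial-prior baseline for illustration.

def anticipate(grid):
    known = np.argwhere(grid != 0)            # coordinates of observed cells
    out = grid.copy()
    for cell in np.argwhere(grid == 0):       # each unknown cell
        d = np.abs(known - cell).sum(axis=1)  # Manhattan distance to observed cells
        out[tuple(cell)] = grid[tuple(known[d.argmin()])]
    return out

grid = np.array([
    [1, 1, 0],
    [1, 2, 0],
    [0, 0, 0],
])
print(anticipate(grid))  # unknown cells filled from nearest observed neighbours
```

A learned model would instead exploit visual context from egocentric RGB-D views, as the abstract describes; this sketch only conveys the input/output shape of the task.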
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
- An Exploration of Embodied Visual Exploration [97.21890864063872]
Embodied computer vision considers perception for robots in novel, unstructured environments.
We present a taxonomy for existing visual exploration algorithms and create a standard framework for benchmarking them.
We then perform a thorough empirical study of the four state-of-the-art paradigms using the proposed framework.
arXiv Detail & Related papers (2020-01-07T17:40:32Z)
- Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling [65.99956848461915]
Vision-and-Language Navigation (VLN) is a task where agents must decide how to move through a 3D environment to reach a goal.
One of the problems of the VLN task is data scarcity since it is difficult to collect enough navigation paths with human-annotated instructions for interactive environments.
We propose an adversarial-driven counterfactual reasoning model that can consider effective conditions instead of low-quality augmented data.
arXiv Detail & Related papers (2019-11-17T18:02:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.