Dynamic Sensor Matching based on Geomagnetic Inertial Navigation
- URL: http://arxiv.org/abs/2208.06233v2
- Date: Tue, 30 Jan 2024 09:28:35 GMT
- Title: Dynamic Sensor Matching based on Geomagnetic Inertial Navigation
- Authors: Simone Müller and Dieter Kranzlmüller
- Abstract summary: We present a concept for transferring multi-sensor data into a commonly referenced world coordinate system.
The steady presence of our planetary magnetic field provides a reliable world coordinate system.
Our evaluation reveals the level of quality achievable using the Earth's magnetic field.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical sensors can capture dynamic environments and derive depth information
in near real-time. The quality of these digital reconstructions is determined
by factors such as illumination, surface and texture conditions, sensing speed,
and other sensor characteristics, as well as the sensor-object relations.
Improvements can be obtained by using dynamically collected data from multiple
sensors. However, matching the data from multiple sensors requires a shared
world coordinate system. We present a concept for transferring multi-sensor
data into a commonly referenced world coordinate system: the Earth's magnetic
field. The steady presence of our planetary magnetic field provides a reliable
world coordinate system, which can serve as a reference for a position-defined
reconstruction of dynamic environments. Our approach is evaluated using
magnetic field sensors of the ZED 2 stereo camera from Stereolabs, which
provide orientation relative to the magnetic North Pole, similar to a compass.
With the help of inertial measurement unit (IMU) information, each camera's
position data can
be transferred into the unified world coordinate system. Our evaluation
reveals the level of quality achievable using the Earth's magnetic field and
provides a basis for dynamic, real-time applications of optical multi-sensor
systems for environment detection.
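To illustrate the underlying idea (not the authors' implementation), a tilt-compensated compass heading can be derived from magnetometer and accelerometer readings and then used to rotate each camera's data into a shared, magnetic-north-aligned frame. A minimal NumPy sketch, assuming a hypothetical x-forward, y-left, z-up sensor convention:

```python
import numpy as np

def tilt_compensated_heading(mag, accel):
    """Heading (radians) relative to magnetic north from raw 3-axis
    magnetometer and accelerometer readings. Axis convention assumed
    here: x forward, y left, z up; real sensors may differ."""
    accel = np.asarray(accel, float)
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)                  # rotation about x
    pitch = np.arctan2(-ax, np.hypot(ay, az))  # rotation about y
    mx, my, mz = np.asarray(mag, float)
    # Project the magnetic vector onto the horizontal plane.
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch)
          + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    return np.arctan2(-yh, xh)

def to_world(points_cam, heading, position):
    """Rotate camera-frame points about the vertical axis by the compass
    heading and translate by the camera position, so data from all
    cameras land in one magnetic-north-aligned world frame."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return np.asarray(points_cam, float) @ R.T + np.asarray(position, float)
```

With per-camera headings computed this way, point clouds from independently moving cameras share one reference orientation, which is the precondition for matching their data.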
Related papers
- Bridging Remote Sensors with Multisensor Geospatial Foundation Models [15.289711240431107]
msGFM is a multisensor geospatial foundation model that unifies data from four key sensor modalities.
For data originating from identical geolocations, our model employs an innovative cross-sensor pretraining approach.
msGFM has demonstrated enhanced proficiency in a range of both single-sensor and multisensor downstream tasks.
arXiv Detail & Related papers (2024-04-01T17:30:56Z)
- Neural Plasticity-Inspired Multimodal Foundation Model for Earth Observation [48.66623377464203]
Our novel approach introduces the Dynamic One-For-All (DOFA) model, leveraging the concept of neural plasticity in brain science.
This dynamic hypernetwork, adjusting to different wavelengths, enables a single versatile Transformer jointly trained on data from five sensors to excel across 12 distinct Earth observation tasks.
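The mechanism described, a hypernetwork that generates input-layer weights from a band's wavelength, can be sketched generically; this is an illustrative toy, not DOFA's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy hypernetwork: a tiny MLP maps a spectral band's central wavelength
# (micrometres; a hypothetical encoding) to a 3x3 conv kernel, so one
# shared backbone can ingest bands from different sensors.
W1, b1 = 0.1 * rng.standard_normal((16, 1)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((9, 16)), np.zeros(9)

def kernel_for(wavelength_um):
    h = np.tanh(W1 @ np.array([wavelength_um]) + b1)
    return (W2 @ h + b2).reshape(3, 3)

red_kernel = kernel_for(0.665)  # e.g. an optical red band (~665 nm)
```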
arXiv Detail & Related papers (2024-03-22T17:11:47Z)
- Automatic Spatial Calibration of Near-Field MIMO Radar With Respect to Optical Sensors [4.328226032204419]
We propose a novel, joint calibration approach for optical RGB-D sensors and MIMO radars that is designed to operate in the radar's near-field range.
Our pipeline consists of a bespoke calibration target, allowing for automatic target detection and localization.
We validate our approach using two different depth sensing technologies from the optical domain.
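Once the calibration target has been localized in both the radar frame and the optical frame, the extrinsic calibration reduces to estimating a rigid transform from point correspondences. A standard Kabsch/SVD sketch of that step (generic, not the paper's exact pipeline):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P (Nx3, e.g.
    target centres in the radar frame) onto Q (Nx3, the same centres in
    the optical frame), so that Q ~= P @ R.T + t."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # reflection-safe rotation
    t = cQ - R @ cP
    return R, t
```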
arXiv Detail & Related papers (2024-03-16T17:24:46Z)
- Decisive Data using Multi-Modality Optical Sensors for Advanced Vehicular Systems [1.3315340349412819]
This paper focuses on various optical technologies for design and development of state-of-the-art out-cabin forward vision systems and in-cabin driver monitoring systems.
The optical sensors in focus include long-wave infrared (LWIR) thermal cameras, near-infrared (NIR) cameras, neuromorphic/event cameras, visible-light CMOS cameras, and depth cameras.
arXiv Detail & Related papers (2023-07-25T16:03:47Z)
- Data-Induced Interactions of Sparse Sensors [3.050919759387984]
We take a thermodynamic view to compute the full landscape of sensor interactions induced by the training data.
Mapping out these data-induced sensor interactions allows combining them with external selection criteria and anticipating sensor replacement impacts.
arXiv Detail & Related papers (2023-07-21T18:13:37Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from a harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization [41.58739817444644]
The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, and the channel response between a 5G massive multiple-input multiple-output (MIMO) testbed and user equipment.
We synchronize these sensors to ensure that all data is recorded simultaneously.
The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks.
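The dataset is synchronized at capture time; when streams merely share a common clock, a typical fallback is nearest-timestamp matching. A minimal sketch of that generic fallback (not the paper's procedure):

```python
import numpy as np

def match_by_timestamp(ts_a, ts_b, tol=0.005):
    """Pair each sample of stream A with the nearest sample of stream B
    (timestamps in seconds, both sorted ascending, len(ts_b) >= 2);
    pairs further apart than `tol` are dropped. Returns index arrays
    into A and B."""
    ts_a, ts_b = np.asarray(ts_a, float), np.asarray(ts_b, float)
    idx = np.clip(np.searchsorted(ts_b, ts_a), 1, len(ts_b) - 1)
    prev, nxt = ts_b[idx - 1], ts_b[idx]
    nearest = np.where(ts_a - prev < nxt - ts_a, idx - 1, idx)
    ok = np.abs(ts_b[nearest] - ts_a) <= tol
    return np.flatnonzero(ok), nearest[ok]
```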
arXiv Detail & Related papers (2023-02-10T15:12:40Z)
- Environmental Sensor Placement with Convolutional Gaussian Neural Processes [65.13973319334625]
It is challenging to place sensors in a way that maximises the informativeness of their measurements, particularly in remote regions like Antarctica.
Probabilistic machine learning models can suggest informative sensor placements by finding sites that maximally reduce prediction uncertainty.
This paper proposes using a convolutional Gaussian neural process (ConvGNP) to address these issues.
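The selection criterion, adding the site whose observation most reduces predictive uncertainty, can be illustrated with a plain Gaussian-process greedy loop; the paper replaces the GP with a ConvGNP, so this sketch shows only the criterion:

```python
import numpy as np

def greedy_placement(K, k, jitter=1e-9):
    """Greedily choose k sensor sites from candidates with prior
    covariance K (n x n), each step picking the site whose (noise-free)
    observation most shrinks the summed predictive variance."""
    K = np.asarray(K, float).copy()
    chosen = []
    for _ in range(k):
        gains = np.full(len(K), -np.inf)
        for j in range(len(K)):
            if j not in chosen:
                # Trace reduction from conditioning on site j.
                gains[j] = (K[:, j] ** 2).sum() / (K[j, j] + jitter)
        j = int(np.argmax(gains))
        chosen.append(j)
        # Posterior covariance after observing site j.
        K -= np.outer(K[:, j], K[j]) / (K[j, j] + jitter)
    return chosen
```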
arXiv Detail & Related papers (2022-11-18T17:25:14Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
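SenFuNet learns these statistics; the classical baseline it generalizes is per-pixel inverse-variance weighting of the sensors' depth estimates, e.g.:

```python
import numpy as np

def fuse_depth_maps(depths, variances):
    """Per-pixel inverse-variance fusion of depth maps stacked along
    axis 0 (one per sensor) -- the hand-tuned counterpart of a learned
    sensor-specific weighting."""
    w = 1.0 / np.asarray(variances, float)
    return (w * np.asarray(depths, float)).sum(axis=0) / w.sum(axis=0)
```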
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
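The teacher-student setup builds on the standard knowledge-distillation objective: a temperature-softened divergence between teacher and student predictions plus the usual supervised loss. A generic sketch of that objective (not SAKDN's full loss):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic KD loss: alpha * T^2 * KL(teacher || student), both
    softened at temperature T, plus (1 - alpha) * cross-entropy with
    the hard labels."""
    p_t, p_s = softmax(teacher_logits, T), softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(-1).mean()
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * T**2 * kl + (1 - alpha) * ce
```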
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies in both public datasets and on progressively degraded datasets.
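Reliability-aware fusion of this kind is commonly realized as soft gating: weights predicted from the latent features themselves rescale each modality before fusion. A minimal sketch under that assumption (illustrative, not the SelectFusion network):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64                                       # latent size (hypothetical)
Wg = 0.1 * rng.standard_normal((2, 2 * D))   # gate weights, one row per modality

def fuse(feat_visual, feat_inertial):
    """Predict a soft reliability weight per modality from the
    concatenated latents, then return the gated sum."""
    z = np.concatenate([feat_visual, feat_inertial])
    logits = Wg @ z
    g = np.exp(logits - logits.max())
    g /= g.sum()                             # softmax gate
    return g[0] * feat_visual + g[1] * feat_inertial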
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.