MODISSA: a multipurpose platform for the prototypical realization of
vehicle-related applications using optical sensors
- URL: http://arxiv.org/abs/2105.13580v1
- Date: Fri, 28 May 2021 04:21:39 GMT
- Authors: Björn Borgmann (1 and 2), Volker Schatz (1), Marcus Hammer (1),
Marcus Hebel (1), Michael Arens (1), Uwe Stilla (2) ((1) Fraunhofer IOSB,
Ettlingen, Germany, (2) Technical University of Munich (TUM), Munich,
Germany)
- Abstract summary: We present the current state of development of the sensor-equipped car MODISSA.
We give a deeper insight into experiments with its specific configuration in the scope of three different applications.
Other research groups can benefit from these experiences when setting up their own mobile sensor system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the current state of development of the sensor-equipped car
MODISSA, with which Fraunhofer IOSB realizes a configurable experimental
platform for hardware evaluation and software development in the context of
mobile mapping and vehicle-related safety and protection. MODISSA is based on a
van that has successively been equipped with a variety of optical sensors over
the past few years, and contains hardware for complete raw data acquisition,
georeferencing, real-time data analysis, and immediate visualization on in-car
displays. We demonstrate the capabilities of MODISSA by giving a deeper insight
into experiments with its specific configuration in the scope of three
different applications. Other research groups can benefit from these
experiences when setting up their own mobile sensor system, especially
regarding the selection of hardware and software, the knowledge of possible
sources of error, and the handling of the acquired sensor data.
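The georeferencing step mentioned in the abstract boils down to a rigid transform: each sensor measurement is rotated and translated into world coordinates using the vehicle's navigation solution. The sketch below is a generic illustration of this idea, not code from the paper; the function name and frame conventions are assumptions.

```python
import numpy as np

def georeference(point_sensor, R_vehicle, t_vehicle):
    """Transform a point from the sensor frame into world coordinates.

    point_sensor : (3,) point measured by the sensor, in the vehicle frame
    R_vehicle    : (3, 3) rotation of the vehicle body in the world frame
                   (e.g. from the INS attitude solution)
    t_vehicle    : (3,) vehicle position in the world frame (e.g. from GNSS)
    """
    return R_vehicle @ point_sensor + t_vehicle

# A point 10 m ahead of a vehicle rotated 90 degrees about the z-axis:
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([100.0, 200.0, 0.0])
p_world = georeference(np.array([10.0, 0.0, 0.0]), R, t)
# p_world is approximately [100, 210, 0]
```

In a real mobile mapping system the sensor additionally has its own lever arm and boresight rotation relative to the vehicle body, which compose with the transform shown here.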
Related papers
- SmartPNT-MSF: A Multi-Sensor Fusion Dataset for Positioning and Navigation Research [5.758433879018026]
This dataset integrates data from multiple sensors, including Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), optical cameras, and LiDAR.
A standardized framework for data collection and processing ensures consistency and scalability, enabling large-scale analysis.
It covers a wide range of real-world scenarios, including urban areas, campuses, tunnels, and suburban environments.
arXiv Detail & Related papers (2025-07-25T09:06:11Z)
- SensorLM: Learning the Language of Wearable Sensors [50.95988682423808]
We present SensorLM, a family of sensor-language foundation models that enable wearable sensor data understanding with natural language.
We introduce a hierarchical caption generation pipeline designed to capture statistical, structural, and semantic information from sensor data.
This approach enabled the curation of the largest sensor-language dataset to date, comprising over 59.7 million hours of data from more than 103,000 people.
arXiv Detail & Related papers (2025-06-10T17:13:09Z)
- Graph-Based Multi-Modal Sensor Fusion for Autonomous Driving [3.770103075126785]
We introduce a novel approach to multi-modal sensor fusion, focusing on developing a graph-based state representation.
We present a Sensor-Agnostic Graph-Aware Kalman Filter, the first online state estimation technique designed to fuse multi-modal graphs.
We validate the effectiveness of our proposed framework through extensive experiments conducted on both synthetic and real-world driving datasets.
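The graph-aware filter above builds on the standard Kalman predict/update cycle, the basic machinery behind most multi-sensor state estimation. The following is a minimal textbook sketch of that building block, not the paper's sensor-agnostic graph formulation; the 1-D example and its noise values are invented for illustration.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state estimate x and covariance P through motion model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Fuse measurement z (model H, noise covariance R) into the estimate."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected state
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P

# 1-D constant-position example: two noisy position sensors fused in turn.
x, P = np.array([0.0]), np.array([[1.0]])
F, Q, H = np.eye(1), 0.01 * np.eye(1), np.eye(1)
for z, R in [(np.array([1.2]), 0.5 * np.eye(1)),
             (np.array([0.8]), 0.5 * np.eye(1))]:
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, z, H, R)
# After both updates the estimate lies between the measurements and the
# covariance has shrunk below its prior value.
```

Each additional sensor modality simply contributes another (z, H, R) triple to the update step, which is what makes the filter a natural fusion backbone.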
arXiv Detail & Related papers (2024-11-06T06:58:17Z)
- Multiple and Gyro-Free Inertial Datasets [1.989354417511267]
An inertial navigation system (INS) uses three accelerometers and three gyroscopes to determine platform position, velocity, and orientation.
There are countless applications for INS, including robotics, autonomous platforms, and the Internet of Things.
Until now, no datasets have been available for gyro-free INS (GFINS) and multiple inertial measurement unit (MIMU) architectures.
This dataset contains 35 hours of inertial data and corresponding ground truth trajectories.
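The INS mechanization described above (gyros for orientation, accelerometers for velocity and position) can be sketched as a simple strapdown integration loop. This is a generic 2-D illustration with Euler integration and no gravity or bias modeling, not the paper's GFINS/MIMU processing; all names and values are assumptions.

```python
import math

def strapdown_step(state, gyro_z, accel_body, dt):
    """One 2-D strapdown integration step.

    state      : (x, y, vx, vy, heading) in the navigation frame
    gyro_z     : yaw rate from the gyroscope [rad/s]
    accel_body : (ax, ay) specific force in the body frame [m/s^2]
    """
    x, y, vx, vy, h = state
    h += gyro_z * dt                              # integrate attitude
    c, s = math.cos(h), math.sin(h)
    ax = c * accel_body[0] - s * accel_body[1]    # rotate acceleration into
    ay = s * accel_body[0] + c * accel_body[1]    # the navigation frame
    vx += ax * dt                                 # integrate velocity
    vy += ay * dt
    x += vx * dt                                  # integrate position
    y += vy * dt
    return (x, y, vx, vy, h)

# Constant 1 m/s^2 forward acceleration, no rotation, 1 s at 100 Hz:
state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = strapdown_step(state, 0.0, (1.0, 0.0), 0.01)
# After 1 s: vx is about 1.0 m/s and x is about 0.5 m
```

A gyro-free INS replaces the `gyro_z` input by angular rates inferred from spatially distributed accelerometers, which is exactly why dedicated GFINS datasets are needed.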
arXiv Detail & Related papers (2024-03-21T17:36:53Z)
- Advancing Location-Invariant and Device-Agnostic Motion Activity Recognition on Wearable Devices [6.557453686071467]
We conduct a comprehensive evaluation of the generalizability of motion models across sensor locations.
Our analysis highlights this challenge and identifies key on-body locations for building location-invariant models.
We present deployable on-device motion models reaching 91.41% frame-level F1-score from a single model irrespective of sensor placements.
arXiv Detail & Related papers (2024-02-06T05:10:00Z)
- Federated Learning on Edge Sensing Devices: A Review [0.0]
Federated Learning (FL) is emerging as a solution to privacy, hardware, and connectivity limitations.
We focus on the key FL principles, software frameworks, and testbeds.
We also explore the current sensor technologies, properties of the sensing devices and sensing applications where FL is utilized.
arXiv Detail & Related papers (2023-11-02T12:55:26Z)
- Framework for Quality Evaluation of Smart Roadside Infrastructure Sensors for Automated Driving Applications [2.0502751783060003]
We present a novel approach to perform detailed quality assessment for smart roadside infrastructure sensors.
Our framework is multimodal across different sensor types and is evaluated on the DAIR-V2X dataset.
arXiv Detail & Related papers (2023-04-16T10:21:07Z)
- Berlin V2X: A Machine Learning Dataset from Multiple Vehicles and Radio Access Technologies [56.77079930521082]
We have conducted a detailed measurement campaign that paves the way to a plethora of diverse ML-based studies.
The resulting datasets offer GPS-located wireless measurements across diverse urban environments for both cellular (with two different operators) and sidelink radio access technologies.
We provide an initial analysis of the data showing some of the challenges that ML needs to overcome and the features that ML can leverage.
arXiv Detail & Related papers (2022-12-20T15:26:39Z)
- HUM3DIL: Semi-supervised Multi-modal 3D Human Pose Estimation for Autonomous Driving [95.42203932627102]
3D human pose estimation is an emerging technology that can enable autonomous vehicles to perceive and understand the subtle and complex behaviors of pedestrians.
Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages.
Our method efficiently makes use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin.
arXiv Detail & Related papers (2022-12-15T11:15:14Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- SensiX: A Platform for Collaborative Machine Learning on the Edge [69.1412199244903]
We present SensiX, a personal edge platform that stays between sensor data and sensing models.
We demonstrate its efficacy in developing motion and audio-based multi-device sensing systems.
Our evaluation shows that SensiX offers a 7-13% increase in overall accuracy and up to 30% increase across different environment dynamics at the expense of a 3 mW power overhead.
arXiv Detail & Related papers (2020-12-04T23:06:56Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies in both public datasets and on progressively degraded datasets.
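The reliability assessment described above is commonly realized as soft gating: each modality's features receive a learned reliability score, and softmax-normalized weights blend the features into one fused representation. The sketch below shows only that generic soft-fusion idea, not SelectFusion's actual network; the fixed scores stand in for what a trained model would predict.

```python
import math

def soft_fuse(features, scores):
    """Weight per-modality feature vectors by softmax-normalized
    reliability scores and sum them into one fused representation."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]    # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(len(features[0]))]
    return fused, weights

# Camera features trusted more (higher score) than degraded LiDAR features:
cam, lidar = [1.0, 0.0], [0.0, 1.0]
fused, w = soft_fuse([cam, lidar], [2.0, 0.0])
# w is approximately [0.88, 0.12]; fused leans toward the camera features
```

Because the weights are differentiable in the scores, the scoring network can be trained end-to-end together with the downstream estimation task.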
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.