Investigating the Effect of Sensor Modalities in Multi-Sensor
Detection-Prediction Models
- URL: http://arxiv.org/abs/2101.03279v1
- Date: Sat, 9 Jan 2021 03:21:36 GMT
- Title: Investigating the Effect of Sensor Modalities in Multi-Sensor
Detection-Prediction Models
- Authors: Abhishek Mohta, Fang-Chieh Chou, Brian C. Becker, Carlos
Vallespi-Gonzalez, Nemanja Djuric
- Abstract summary: We focus on the contribution of individual sensor modalities to model performance.
In addition, we investigate the use of sensor dropout to mitigate the above-mentioned issues.
- Score: 8.354898936252516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detection of surrounding objects and their motion prediction are critical
components of a self-driving system. Recently proposed models that jointly
address these tasks rely on a number of sensors to achieve state-of-the-art
performance. However, this increases system complexity and may result in a
brittle model that overfits to any single sensor modality while ignoring
others, leading to reduced generalization. We focus on this important problem
and analyze the contribution of sensor modalities to model performance. In
addition, we investigate the use of sensor dropout to mitigate
the above-mentioned issues, leading to a more robust, better-performing model
on real-world driving data.
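The paper itself does not include code, but the mitigation it investigates, sensor dropout, can be sketched in a few lines. The snippet below is a minimal, hypothetical PyTorch illustration: per-modality feature maps (the modality names, shapes, and drop probability are assumptions, not the authors' implementation) are randomly zeroed out during training so the fused model cannot over-rely on any single sensor.

```python
import torch
import torch.nn as nn


class SensorDropout(nn.Module):
    """Randomly zeroes out entire sensor modalities during training.

    Sketch of the general idea discussed in the paper: by occasionally hiding
    a modality, the fused model is discouraged from over-relying on any single
    sensor. The drop probability is an illustrative assumption.
    """

    def __init__(self, drop_prob: float = 0.2):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, features: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
        if not self.training:
            return features  # leave all modalities intact at inference time
        out, kept = {}, 0
        names = list(features)
        for name in names:
            keep = torch.rand(()) >= self.drop_prob
            out[name] = features[name] if keep else torch.zeros_like(features[name])
            kept += int(keep)
        # Guarantee at least one modality survives so the model always has input.
        if kept == 0:
            name = names[torch.randint(len(names), ()).item()]
            out[name] = features[name]
        return out


if __name__ == "__main__":
    # Toy BEV-style feature maps for three modalities (batch, channels, H, W).
    feats = {m: torch.randn(2, 8, 16, 16) for m in ("lidar", "camera", "radar")}
    dropout = SensorDropout(drop_prob=0.3)
    dropout.train()
    dropped = dropout(feats)
    print({m: bool(t.abs().sum() > 0) for m, t in dropped.items()})
```

In a full detection-prediction model, a module like this would sit between the per-sensor feature extractors and the fusion backbone; at inference time all modalities pass through unchanged, mirroring how standard dropout behaves between training and evaluation.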
Related papers
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn sensor-invariant features.
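As a rough illustration of the sensor-invariance idea (not the MSSIDD authors' framework), one could add a consistency term that pulls together the features a denoiser extracts from the same scene captured by different sensors; the stand-in encoder, noise models, and loss form below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: a consistency term that pulls together the features a (hypothetical)
# denoiser encoder produces for the same scene rendered by two different sensors.
# This illustrates the general notion of "sensor-invariant features"; it is not
# the MSSIDD training framework.
encoder = nn.Sequential(  # stand-in feature extractor for 4-channel raw patches
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1),
)

def consistency_loss(raw_sensor_a: torch.Tensor, raw_sensor_b: torch.Tensor) -> torch.Tensor:
    """L2 distance between features of the same scene from two sensors."""
    return F.mse_loss(encoder(raw_sensor_a), encoder(raw_sensor_b))

# Toy usage: two 4-channel raw patches of the same scene with made-up noise models.
scene = torch.randn(1, 4, 32, 32)
raw_a = scene + 0.05 * torch.randn_like(scene)  # "sensor A"
raw_b = scene + 0.10 * torch.randn_like(scene)  # "sensor B"
loss = consistency_loss(raw_a, raw_b)  # would be added to the usual denoising loss
print(float(loss))
```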
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes [56.52618054240197]
We propose a novel, condition-aware multimodal fusion approach for robust semantic perception of driving scenes.
Our method, CAFuser, uses an RGB camera input to classify environmental conditions and generate a Condition Token that guides the fusion of multiple sensor modalities.
We set the new state of the art with CAFuser on the MUSES dataset with 59.7 PQ for multimodal panoptic segmentation and 78.2 mIoU for semantic segmentation, ranking first on the public benchmarks.
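CAFuser's full architecture is not reproduced here; the sketch below only illustrates the mechanism the summary describes, an RGB-derived Condition Token that weights each modality before fusion. The module sizes and the softmax gating form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionGatedFusion(nn.Module):
    """Sketch of condition-aware fusion: an RGB-derived token gates modalities.

    Not the CAFuser implementation; only the mechanism described in the
    abstract summary (a Condition Token guiding multi-sensor fusion).
    """

    def __init__(self, channels: int, num_modalities: int, token_dim: int = 32):
        super().__init__()
        # Hypothetical condition encoder: pools the RGB features into a token.
        self.condition_encoder = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, token_dim), nn.ReLU(),
        )
        # Maps the token to one gate weight per modality.
        self.gate = nn.Linear(token_dim, num_modalities)

    def forward(self, rgb_feat: torch.Tensor, modality_feats: list[torch.Tensor]) -> torch.Tensor:
        token = self.condition_encoder(rgb_feat)            # (B, token_dim)
        weights = torch.softmax(self.gate(token), dim=-1)   # (B, M)
        stacked = torch.stack(modality_feats, dim=1)        # (B, M, C, H, W)
        w = weights.view(*weights.shape, 1, 1, 1)           # broadcast over C, H, W
        return (w * stacked).sum(dim=1)                     # fused (B, C, H, W)

# Toy usage with three extra modalities plus the RGB stream.
fusion = ConditionGatedFusion(channels=16, num_modalities=3)
rgb = torch.randn(2, 16, 32, 32)
others = [torch.randn(2, 16, 32, 32) for _ in range(3)]
print(fusion(rgb, others).shape)  # torch.Size([2, 16, 32, 32])
```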
arXiv Detail & Related papers (2024-10-14T17:56:20Z)
- Increasing the Robustness of Model Predictions to Missing Sensors in Earth Observation [5.143097874851516]
We study two novel methods tailored for multi-sensor scenarios, namely Input Sensor Dropout (ISensD) and Ensemble Sensor Invariant (ESensI).
We demonstrate that these methods effectively increase the robustness of model predictions to missing sensors.
We observe that ensemble multi-sensor models are the most robust to the lack of sensors.
arXiv Detail & Related papers (2024-07-22T09:58:29Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Data-Based Design of Multi-Model Inferential Sensors [0.0]
The nonlinear character of industrial processes is usually the main limitation to designing simple linear inferential sensors.
We propose two novel approaches for the design of multi-model inferential sensors.
The results show substantial improvements over the state-of-the-art design techniques for single-/multi-model inferential sensors.
arXiv Detail & Related papers (2023-08-05T12:55:15Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Detection of Sensor-To-Sensor Variations using Explainable AI [2.2956649873563952]
Chemi-resistive gas sensing devices are plagued by issues of sensor variation introduced during manufacturing.
This study proposes a novel approach for detecting sensor-to-sensor variations in sensing devices using the explainable AI (XAI) method of SHapley Additive exPlanations (SHAP).
The methodology is tested using artificial and realistic Ozone concentration profiles to train a Gated Recurrent Unit (GRU) model.
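The study's GRU-plus-SHAP pipeline is not reproduced here; the fragment below only illustrates, on synthetic data, how SHAP attributions computed for two devices can be compared channel by channel to flag a sensor-to-sensor variation. It uses a model-agnostic KernelExplainer and a random-forest stand-in rather than the paper's exact setup.

```python
import numpy as np
import shap                      # SHapley Additive exPlanations
from sklearn.ensemble import RandomForestRegressor

# Sketch only: synthetic "gas sensor" channels from two devices, where device B
# has a small manufacturing offset on one channel. A model is trained on device A,
# and SHAP attributions for both devices are compared channel by channel; a shift
# in attribution hints at sensor-to-sensor variation. This illustrates the general
# SHAP-based idea, not the paper's GRU pipeline.
rng = np.random.default_rng(0)
X_a = rng.normal(size=(300, 4))                       # device A channels
y = X_a @ np.array([1.0, 0.5, -0.3, 0.0]) + 0.05 * rng.normal(size=300)
X_b = X_a.copy()
X_b[:, 1] += 0.8                                      # simulated variation on channel 1

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_a, y)

background = shap.sample(X_a, 50)                     # small background set
explainer = shap.KernelExplainer(model.predict, background)
shap_a = explainer.shap_values(X_a[:50], nsamples=100)
shap_b = explainer.shap_values(X_b[:50], nsamples=100)

# Channels whose mean attribution shifts between devices are flagged.
shift = np.abs(shap_a.mean(axis=0) - shap_b.mean(axis=0))
print("per-channel attribution shift:", np.round(shift, 3))
```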
arXiv Detail & Related papers (2023-06-19T11:00:54Z)
- Anomaly Detection and Inter-Sensor Transfer Learning on Smart Manufacturing Datasets [6.114996271792091]
In many cases, the goal of the smart manufacturing system is to rapidly detect (or anticipate) failures to reduce operational cost and eliminate downtime.
This often boils down to detecting anomalies within the sensor data acquired from the system.
The smart manufacturing application domain poses certain salient technical challenges.
We show that predictive failure classification can be achieved, thus paving the way for predictive maintenance.
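The entry does not name a specific detector, so the following is only a generic, hedged sketch of the basic workflow (fit on mostly-normal multivariate sensor readings, score new windows) using scikit-learn's IsolationForest; it does not reproduce the paper's models or its smart-manufacturing datasets.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Generic sketch of anomaly detection on multivariate sensor readings: fit on
# (mostly) normal operation data, then score a mix of normal and fault-like rows.
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 6))          # six sensor channels, normal operation
faulty = rng.normal(4.0, 1.0, size=(10, 6))           # injected fault-like readings

detector = IsolationForest(contamination=0.02, random_state=1).fit(normal)
test = np.vstack([normal[:5], faulty[:5]])
print(detector.predict(test))                         # +1 = normal, -1 = anomaly
print(np.round(detector.decision_function(test), 3))  # lower scores = more anomalous
```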
arXiv Detail & Related papers (2022-06-13T17:51:24Z)
- Assessing Machine Learning Approaches to Address IoT Sensor Drift [0.15229257192293197]
We study and test several approaches with regard to their ability to cope with and adapt to sensor drift under realistic conditions.
Most of these approaches are recent and thus are representative of the current state-of-the-art.
The results show substantial drops in sensing performance due to sensor drift in spite of the approaches.
arXiv Detail & Related papers (2021-09-02T19:15:31Z)
- Bandit Quickest Changepoint Detection [55.855465482260165]
Continuous monitoring of every sensor can be expensive due to resource constraints.
We derive an information-theoretic lower bound on the detection delay for a general class of finitely parameterized probability distributions.
We propose a computationally efficient online sensing scheme, which seamlessly balances the need for exploration of different sensing options with exploitation of querying informative actions.
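As a toy illustration of the problem setup (not the paper's algorithm), the loop below queries one sensor per step, choosing it epsilon-greedily from the running per-sensor CUSUM statistics, and declares a change once any statistic crosses a threshold; the Gaussian mean-shift model, threshold, and epsilon are all assumptions.

```python
import numpy as np

# Toy bandit-style sensing for quickest changepoint detection: at each step only
# one sensor is queried; the choice trades exploration of idle sensors against
# exploitation of the most suspicious one via its CUSUM statistic.
rng = np.random.default_rng(42)
n_sensors, change_time, changed_sensor = 5, 200, 3
mu0, mu1, sigma = 0.0, 1.0, 1.0            # pre/post-change means (assumed known here)
threshold, eps = 8.0, 0.2

cusum = np.zeros(n_sensors)
for t in range(2000):
    # Epsilon-greedy sensor selection based on the current CUSUM statistics.
    k = rng.integers(n_sensors) if rng.random() < eps else int(np.argmax(cusum))
    # Simulate the queried sensor's observation.
    mean = mu1 if (k == changed_sensor and t >= change_time) else mu0
    x = rng.normal(mean, sigma)
    # CUSUM update with the Gaussian log-likelihood ratio for a mean shift.
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
    cusum[k] = max(0.0, cusum[k] + llr)
    if cusum[k] > threshold:
        print(f"change declared at t={t} on sensor {k} (true change at {change_time})")
        break
```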
arXiv Detail & Related papers (2021-07-22T07:25:35Z)
- Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies in both public datasets and on progressively degraded datasets.
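SelectFusion itself is not reproduced here; the sketch below shows one common way a network can assess the reliability of latent features from two modalities, with a small gating head producing soft per-feature weights before fusion. Layer sizes and the sigmoid gating form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftSelectiveFusion(nn.Module):
    """Sketch of selective fusion: learn soft masks over two modalities' features.

    Illustrates the general idea of weighting per-feature reliability before
    fusing; it is not the SelectFusion architecture itself.
    """

    def __init__(self, dim_a: int, dim_b: int):
        super().__init__()
        # The mask head sees both latent vectors and outputs one weight per feature.
        self.mask_head = nn.Sequential(
            nn.Linear(dim_a + dim_b, dim_a + dim_b), nn.Sigmoid()
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([feat_a, feat_b], dim=-1)
        mask = self.mask_head(joint)   # values in (0, 1): estimated reliability
        return mask * joint            # re-weighted, concatenated features

# Toy usage: visual and inertial latent features for a batch of two samples.
fusion = SoftSelectiveFusion(dim_a=64, dim_b=32)
fused = fusion(torch.randn(2, 64), torch.randn(2, 32))
print(fused.shape)  # torch.Size([2, 96])
```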
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.