Sensor Generalization for Adaptive Sensing in Event-based Object Detection via Joint Distribution Training
- URL: http://arxiv.org/abs/2602.23357v1
- Date: Thu, 26 Feb 2026 18:57:52 GMT
- Title: Sensor Generalization for Adaptive Sensing in Event-based Object Detection via Joint Distribution Training
- Authors: Aheli Saha, René Schuster, Didier Stricker
- Abstract summary: Bio-inspired event cameras have recently attracted significant research interest due to their asynchronous and low-latency capabilities. There is a gap in the variability of available data and a lack of extensive analysis of the parameters characterizing their signals. This paper addresses these issues by providing readers with an in-depth understanding of how intrinsic parameters affect the performance of a model trained on event data, specifically for object detection.
- Score: 18.51701989107632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bio-inspired event cameras have recently attracted significant research interest due to their asynchronous and low-latency capabilities. These features provide a high dynamic range and significantly reduce motion blur. However, because of the novelty of their output signals, there is a gap in the variability of available data and a lack of extensive analysis of the parameters characterizing their signals. This paper addresses these issues by providing readers with an in-depth understanding of how intrinsic parameters affect the performance of a model trained on event data, specifically for object detection. We also use our findings to expand the capabilities of the downstream model towards sensor-agnostic robustness.
Related papers
- Feature-Aware Test Generation for Deep Learning Models [0.5368630420272898]
We introduce Detect, a feature-aware test generation framework for vision-based deep learning (DL) models. It generates inputs by perturbing disentangled semantic attributes within the latent space. It identifies which features lead to behavior shifts and uses a vision-language model for semantic attribution.
arXiv Detail & Related papers (2026-01-20T15:41:06Z) - Continual Adaptation: Environment-Conditional Parameter Generation for Object Detection in Dynamic Scenarios [54.58186816693791]
Environments constantly change over time and space, posing significant challenges for object detectors trained under a closed-set assumption. We propose a new mechanism that converts the fine-tuning process into specific-parameter generation. In particular, we first design a dual-path LoRA-based domain-aware adapter that disentangles features into domain-invariant and domain-specific components.
arXiv Detail & Related papers (2025-06-30T17:14:12Z) - MATE: Motion-Augmented Temporal Consistency for Event-based Point Tracking [58.719310295870024]
This paper presents an event-based framework for tracking any point. To resolve ambiguities caused by event sparsity, a motion-guidance module incorporates kinematic vectors into the local matching process. The method improves the $Survival_{50}$ metric by 17.9% over the event-only tracking-any-point baseline.
arXiv Detail & Related papers (2024-12-02T09:13:29Z) - Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture. We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z) - Increasing the Robustness of Model Predictions to Missing Sensors in Earth Observation [5.143097874851516]
We study two novel methods tailored for multi-sensor scenarios, namely Input Sensor Dropout (ISensD) and Ensemble Sensor Invariant (ESensI).
We demonstrate that these methods effectively increase the robustness of model predictions to missing sensors.
We observe that ensemble multi-sensor models are the most robust to the lack of sensors.
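As a rough illustration of the input-sensor-dropout idea described above (a minimal sketch; the function name, shapes, and parameters are ours, not taken from the paper), one can randomly zero out entire per-sensor channel groups during training so the model learns to make predictions when a sensor is missing:

```python
import numpy as np

def input_sensor_dropout(batch, sensor_slices, p=0.3, rng=None):
    """Randomly zero out whole sensor channel groups during training.

    batch: array of shape (N, C); channels are grouped per sensor.
    sensor_slices: list of slice objects, one channel range per sensor.
    p: probability of dropping each sensor independently.
    """
    rng = rng or np.random.default_rng()
    out = batch.copy()  # leave the original batch untouched
    for s in sensor_slices:
        if rng.random() < p:
            out[:, s] = 0.0  # simulate this sensor being absent
    return out
```

At inference time the same model can then be fed zeros for any sensor that is actually unavailable, which is the robustness scenario the paper targets.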
arXiv Detail & Related papers (2024-07-22T09:58:29Z) - OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z) - Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for
Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - Automated Mobility Context Detection with Inertial Signals [7.71058263701836]
The primary goal of this paper is the investigation of context detection for remote monitoring of daily motor functions.
We aim to understand whether inertial signals sampled with wearable accelerometers provide reliable information to classify gait-related activities as either indoor or outdoor.
arXiv Detail & Related papers (2022-05-16T09:34:43Z) - Description of Structural Biases and Associated Data in Sensor-Rich Environments [6.548580592686077]
We study activity recognition in the context of sensor-rich environments.
We address the problem of inductive biases and their impact on the data collection process.
We propose a metamodeling process in which the sensor data is structured in layers.
arXiv Detail & Related papers (2021-04-11T00:26:59Z) - Human Activity Recognition from Wearable Sensor Data Using Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvements over recent state-of-the-art models in both benchmark test-subject and leave-one-subject-out evaluations.
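To make the core operation concrete, here is a minimal sketch of single-head scaled dot-product self-attention applied to a window of body-worn sensor readings (the shapes, names, and projection setup are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over time steps.

    x: (T, d) window of sensor readings, one row per time step.
    w_q, w_k, w_v: (d, d_k) projection matrices for queries/keys/values.
    Returns: (T, d_k) context vectors, one per time step.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (T, T) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over time steps
    return weights @ v                               # attention-weighted values
```

Each output row is a mixture of sensor readings from the whole window, weighted by learned similarity, which is what lets such models pick out the time steps most relevant to an activity.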
arXiv Detail & Related papers (2020-03-17T14:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.