EREBUS: End-to-end Robust Event Based Underwater Simulation
- URL: http://arxiv.org/abs/2511.01381v1
- Date: Mon, 03 Nov 2025 09:28:48 GMT
- Title: EREBUS: End-to-end Robust Event Based Underwater Simulation
- Authors: Hitesh Kyatham, Arjun Suresh, Aadi Palnitkar, Yiannis Aloimonos,
- Abstract summary: We introduce a pipeline which can be used to generate realistic synthetic data of an event-based camera mounted to an AUV. We demonstrate the effectiveness of our pipeline using the task of rock detection with poor visibility and suspended particulate matter.
- Score: 12.65103321991945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The underwater domain presents a vast array of challenges for roboticists and computer vision researchers alike, such as poor lighting conditions and high dynamic range scenes. In these adverse conditions, traditional vision techniques struggle to adapt and lead to suboptimal performance. Event-based cameras present an attractive solution to this problem, mitigating the issues of traditional cameras by asynchronously reporting per-pixel brightness changes rather than capturing full frames. In this paper, we introduce a pipeline that can be used to generate realistic synthetic data of an event-based camera mounted to an AUV (Autonomous Underwater Vehicle) in an underwater environment for training vision models. We demonstrate the effectiveness of our pipeline using the task of rock detection with poor visibility and suspended particulate matter, but the approach can be generalized to other underwater tasks.
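The abstract's key mechanism — events fired by per-pixel brightness changes rather than full frames — is commonly simulated from rendered intensity frames using a contrast-threshold model. The sketch below is not the paper's actual pipeline; it is a minimal, assumed illustration of that standard model, where an event `(x, y, t, polarity)` fires whenever the log intensity at a pixel drifts by more than a threshold `C` from its last reference value. The function name and simplification (at most one event per pixel per frame pair) are this sketch's own choices.

```python
import numpy as np

def events_from_frames(frames, timestamps, C=0.2, eps=1e-6):
    """Generate (x, y, t, polarity) events from a sequence of intensity
    frames using the standard contrast-threshold event camera model.

    frames: iterable of 2-D float arrays (same shape), intensities > 0
    timestamps: matching iterable of frame times
    C: contrast threshold on log intensity
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    log_ref = np.log(frames[0] + eps)  # per-pixel log-intensity reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        diff = np.log(frame + eps) - log_ref
        # Positive events for brightness increases, negative for decreases.
        for polarity, mask in ((+1, diff >= C), (-1, diff <= -C)):
            ys, xs = np.nonzero(mask)
            events.extend((int(x), int(y), t, polarity)
                          for x, y in zip(xs, ys))
            # Advance the reference toward the new value in C-sized steps,
            # as a hardware pixel's comparator reference would.
            log_ref[mask] += polarity * C * np.floor(np.abs(diff[mask]) / C)
    return events
```

A doubling of intensity at one pixel (log change ≈ 0.69 > C = 0.2) yields a single positive event there; a real sensor (and higher-fidelity simulators) would interpolate between frames to emit one event per threshold crossing.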
Related papers
- UEOF: A Benchmark Dataset for Underwater Event-Based Optical Flow [6.553956273453576]
We introduce the first synthetic underwater benchmark dataset for event-based optical flow derived from physically-based ray-traced RGBD sequences. We benchmark state-of-the-art learning-based and model-based optical flow prediction methods to understand how underwater light transport affects event formation and motion estimation accuracy. Our dataset establishes a new baseline for future development and evaluation of underwater event-based perception algorithms.
arXiv Detail & Related papers (2026-01-15T04:10:14Z) - Expose Camouflage in the Water: Underwater Camouflaged Instance Segmentation and Dataset [76.92197418745822]
Camouflaged instance segmentation (CIS) faces greater challenges in accurately segmenting objects that blend closely with their surroundings. Traditional camouflaged instance segmentation methods, trained on terrestrial-dominated datasets with limited underwater samples, may exhibit inadequate performance in underwater scenes. We introduce the first underwater camouflaged instance segmentation dataset, UCIS4K, which comprises 3,953 images of camouflaged marine organisms with instance-level annotations.
arXiv Detail & Related papers (2025-10-20T14:34:51Z) - Learning Underwater Active Perception in Simulation [51.205673783866146]
Turbidity can jeopardise the whole mission as it may prevent correct visual documentation of the inspected structures. Previous works have introduced methods to adapt to turbidity and backscattering. We propose a simple yet efficient approach to enable high-quality image acquisition of assets in a broad range of water conditions.
arXiv Detail & Related papers (2025-04-23T06:48:38Z) - IBURD: Image Blending for Underwater Robotic Detection [17.217395753087157]
IBURD generates both images of underwater debris and their pixel-level annotations. IBURD is able to robustly blend transparent objects into arbitrary backgrounds.
arXiv Detail & Related papers (2025-02-24T22:56:49Z) - Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z) - LU2Net: A Lightweight Network for Real-time Underwater Image Enhancement [4.353142366661057]
Lightweight Underwater Unet (LU2Net) is a novel U-shape network designed specifically for real-time enhancement of underwater images.
LU2Net is capable of providing well-enhanced underwater images at a speed 8 times faster than the current state-of-the-art underwater image enhancement method.
arXiv Detail & Related papers (2024-06-21T08:33:13Z) - An Efficient Detection and Control System for Underwater Docking using Machine Learning and Realistic Simulation: A Comprehensive Approach [5.039813366558306]
This work compares different deep-learning architectures to perform underwater docking detection and classification.
A Generative Adversarial Network (GAN) is used to do image-to-image translation, converting the Gazebo simulation image into an underwater-looking image.
Results show an improvement of 20% in the high turbidity scenarios regardless of the underwater currents.
arXiv Detail & Related papers (2023-11-02T18:10:20Z) - On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - Video Waterdrop Removal via Spatio-Temporal Fusion in Driving Scenes [53.16726447796844]
The waterdrops on windshields during driving can cause severe visual obstructions, which may lead to car accidents.
We propose an attention-based framework that fuses the representations from multiple frames to restore visual information occluded by waterdrops.
arXiv Detail & Related papers (2023-02-12T13:47:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.