Vision At Night: Exploring Biologically Inspired Preprocessing For Improved Robustness Via Color And Contrast Transformations
- URL: http://arxiv.org/abs/2509.24863v1
- Date: Mon, 29 Sep 2025 14:48:32 GMT
- Title: Vision At Night: Exploring Biologically Inspired Preprocessing For Improved Robustness Via Color And Contrast Transformations
- Authors: Lorena Stracke, Lia Nimmermann, Shashank Agnihotri, Margret Keuper, Volker Blanz
- Abstract summary: We explore biologically motivated input preprocessing for robust semantic segmentation. By applying Difference-of-Gaussians (DoG) filtering to RGB, grayscale, and opponent-color channels, we enhance local contrast without modifying model architecture or training. We show that such preprocessing maintains in-distribution performance while improving robustness to adverse conditions like night, fog, and snow.
- Score: 18.437759539809175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by the human visual system's mechanisms for contrast enhancement and color-opponency, we explore biologically motivated input preprocessing for robust semantic segmentation. By applying Difference-of-Gaussians (DoG) filtering to RGB, grayscale, and opponent-color channels, we enhance local contrast without modifying model architecture or training. Evaluations on Cityscapes, ACDC, and Dark Zurich show that such preprocessing maintains in-distribution performance while improving robustness to adverse conditions like night, fog, and snow. As this processing is model-agnostic and lightweight, it holds potential for integration into imaging pipelines, enabling imaging systems to deliver task-ready, robust inputs for downstream vision models in safety-critical environments.
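The DoG preprocessing described in the abstract can be sketched in a few lines of NumPy. The opponent-color axes, Gaussian sigmas, and per-channel normalization below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel, truncated at 3 sigma and normalized."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(channel, sigma):
    """Separable 2D Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 0, channel, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def dog(channel, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians: narrow 'center' minus wide 'surround'."""
    return blur(channel, sigma_center) - blur(channel, sigma_surround)

def to_opponent(rgb):
    """Map RGB to a simple opponent-color space:
    luminance, red-green, and blue-yellow axes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0
    rg = r - g
    by = b - (r + g) / 2.0
    return np.stack([lum, rg, by], axis=-1)

def preprocess(rgb):
    """Contrast-enhance each opponent channel with DoG filtering,
    then rescale each channel to [0, 1] for the downstream model."""
    opp = to_opponent(rgb.astype(np.float64))
    out = np.stack([dog(opp[..., c]) for c in range(3)], axis=-1)
    out -= out.min(axis=(0, 1), keepdims=True)
    out /= out.max(axis=(0, 1), keepdims=True) + 1e-8
    return out

img = np.random.rand(64, 64, 3)
enhanced = preprocess(img)
print(enhanced.shape)  # (64, 64, 3)
```

Because the transform is a fixed, model-agnostic filter, it can be dropped in front of any segmentation network without retraining, which is the portability property the abstract emphasizes.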
Related papers
- Adversarial Patch Generation for Visual-Infrared Dense Prediction Tasks via Joint Position-Color Optimization [14.358458317718174]
We propose a joint position-color optimization framework (AP-PCO) for generating adversarial patches in visual-infrared settings. We introduce a cross-modal color adaptation strategy that constrains patch appearance according to infrared grayscale characteristics. Experiments on visual-infrared dense prediction tasks demonstrate that the proposed AP-PCO achieves consistently strong attack performance.
arXiv Detail & Related papers (2026-02-27T19:26:17Z) - Optimization and Mobile Deployment for Anthropocene Neural Style Transfer [0.3867363075280543]
AnthropoCam is a mobile-based neural style transfer system optimized for the visual synthesis of Anthropocene environments. The system integrates a React Native front end with a Flask-based GPU backend, achieving high-resolution inference within 3-5 seconds on general mobile hardware.
arXiv Detail & Related papers (2026-01-29T00:50:03Z) - DeshadowMamba: Deshadowing as 1D Sequential Similarity [85.07259906446588]
We introduce Mamba, a selective state space model that propagates global context through directional state transitions. Despite its potential, directly applying Mamba to image data is suboptimal, since it lacks awareness of shadow-non-shadow semantics. We propose CrossGate, a directional modulation mechanism that injects shadow-aware similarity into Mamba's input gate. To further ensure appearance fidelity, we introduce ColorShift regularization, a contrastive learning objective driven by global color statistics.
arXiv Detail & Related papers (2025-10-28T10:14:23Z) - Enhancing Infrared Vision: Progressive Prompt Fusion Network and Benchmark [58.61079960074608]
Existing infrared image enhancement methods focus on tackling individual degradations. All-in-one enhancement methods, commonly applied to RGB sensors, often demonstrate limited effectiveness.
arXiv Detail & Related papers (2025-10-10T12:55:54Z) - DACA-Net: A Degradation-Aware Conditional Diffusion Network for Underwater Image Enhancement [16.719513778795367]
Underwater images typically suffer from severe colour distortions, low visibility, and reduced structural clarity due to complex optical effects such as scattering and absorption. Existing enhancement methods often struggle to adaptively handle diverse degradation conditions and fail to leverage underwater-specific physical priors effectively. We propose a degradation-aware conditional diffusion model to enhance underwater images adaptively and robustly.
arXiv Detail & Related papers (2025-07-30T09:16:07Z) - SpikeGen: Decoupled "Rods and Cones" Visual Representation Processing with Latent Generative Framework [53.27177454390712]
This study seeks to emulate the human visual system by integrating multi-modal visual inputs with modern latent-space generative frameworks. We name it SpikeGen. We evaluate its performance across various spike-RGB tasks, including conditional image and video deblurring, dense frame reconstruction from spike streams, and high-speed scene novel-view synthesis.
arXiv Detail & Related papers (2025-05-23T15:54:11Z) - DifIISR: A Diffusion Model with Gradient Guidance for Infrared Image Super-Resolution [32.53713932204663]
DifIISR is an infrared image super-resolution diffusion model optimized for visual quality and perceptual performance. We introduce an infrared thermal spectrum distribution regulation to preserve visual fidelity. We incorporate various visual foundational models as the perceptual guidance for downstream visual tasks.
arXiv Detail & Related papers (2025-03-03T05:20:57Z) - Oscillation Inversion: Understand the structure of Large Flow Model through the Lens of Inversion Method [60.88467353578118]
We show that a fixed-point-inspired iterative approach to invert real-world images does not achieve convergence, instead oscillating between distinct clusters.
We introduce a simple and fast distribution transfer technique that facilitates image enhancement, stroke-based recoloring, as well as visual prompt-guided image editing.
arXiv Detail & Related papers (2024-11-17T17:45:37Z) - Dual High-Order Total Variation Model for Underwater Image Restoration [13.789310785350484]
Underwater image enhancement and restoration (UIER) is a crucial approach to improving the visual quality of underwater images.
We propose an effective variational framework based on an extended underwater image formation model (UIFM).
In our proposed framework, weight-factor-based color compensation is combined with color balance to compensate for the attenuated color channels and remove the color cast.
arXiv Detail & Related papers (2024-07-20T13:06:37Z) - LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images. Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules. We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z) - Enhancing Underwater Image via Adaptive Color and Contrast Enhancement, and Denoising [2.298932494750101]
We propose an adaptive color and contrast enhancement, and denoising (ACCE-D) framework for underwater image enhancement.
We derive a numerical solution for ACCE, and adopt a pyramid-based strategy to accelerate the solving procedure.
Experimental results demonstrate that our strategy is effective in color correction, visibility improvement, and detail revealing.
arXiv Detail & Related papers (2021-04-02T14:37:20Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z) - DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
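For context, the classical frequency-domain Wiener deconvolution that this entry builds on can be sketched in a few lines of NumPy. The kernel, test image, and noise-to-signal ratio below are toy assumptions, and the paper's learned deep features and refinement module are not reproduced here:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-6):
    """Classical Wiener filter in the Fourier domain:
    X_hat = conj(H) * Y / (|H|^2 + NSR)."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# Toy demo: circularly blur a bright square with a 3x3 box kernel,
# then restore it with the same (known) kernel.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel)
```

The NSR term regularizes frequencies where the blur kernel's response is near zero; DWDN's contribution is to apply this deconvolution to learned feature maps rather than raw pixels.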
arXiv Detail & Related papers (2021-03-18T00:38:11Z) - Underwater Image Color Correction by Complementary Adaptation [0.0]
We propose a novel approach for underwater image color correction based on a Tikhonov type optimization model in the CIELAB color space.
Understood as a long-term adaptive process, our method effectively removes the underwater color cast and yields a balanced color distribution.
arXiv Detail & Related papers (2020-10-21T03:59:22Z) - Creating Artificial Modalities to Solve RGB Liveness [79.9255035557979]
We introduce two types of artificial transforms, rank pooling and optical flow, combined in an end-to-end pipeline for spoof detection.
The proposed method achieves state-of-the-art performance on CASIA-SURF CeFA (RGB), the largest cross-ethnicity face anti-spoofing dataset.
arXiv Detail & Related papers (2020-06-29T13:19:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.