BELE: Blur Equivalent Linearized Estimator
- URL: http://arxiv.org/abs/2503.00503v1
- Date: Sat, 01 Mar 2025 14:19:08 GMT
- Title: BELE: Blur Equivalent Linearized Estimator
- Authors: Paolo Giannitrapani, Elio D. Di Claudio, Giovanni Jacovitti
- Abstract summary: This paper introduces a novel parametric model that separates perceptual effects due to strong edge degradations from those caused by texture distortions. The first is the Blur Equivalent Linearized Estimator, designed to measure blur on strong and isolated edges. The second is a Complex Peak Signal-to-Noise Ratio, which evaluates distortions affecting texture regions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the Full-Reference Image Quality Assessment context, Mean Opinion Score values represent subjective evaluations based on retinal perception, while objective metrics assess the reproduced image on the display. Bridging these subjective and objective domains requires parametric mapping functions, which are sensitive to the observer's viewing distance. This paper introduces a novel parametric model that separates perceptual effects due to strong edge degradations from those caused by texture distortions. These effects are quantified using two distinct quality indices. The first is the Blur Equivalent Linearized Estimator, designed to measure blur on strong and isolated edges while accounting for variations in viewing distance. The second is a Complex Peak Signal-to-Noise Ratio, which evaluates distortions affecting texture regions. The first-order effects of the estimator are directly tied to the first index, for which we introduce the concept of \emph{focalization}, interpreted as a linearization term. Starting from a Positional Fisher Information loss model applied to Gaussian blur distortion in natural images, we demonstrate how this model can generalize to linearize all types of distortions. Finally, we validate our theoretical findings by comparing them with several state-of-the-art classical and deep-learning-based full-reference image quality assessment methods on widely used benchmark datasets.
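The paper's BELE and Complex PSNR indices are not reproduced here; as a rough, stdlib-only illustration of the two ingredients the abstract names (Gaussian blur as the reference distortion and a PSNR-style fidelity index), the following sketch blurs a toy step-edge image with a separable Gaussian and scores it against the reference. The image size, sigma values, and helper names are illustrative choices, not the authors' implementation.

```python
import math

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1
    if radius is None:
        radius = max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, kernel):
    # convolve one row, clamping indices at the borders (replicate padding)
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def gaussian_blur(img, sigma):
    # separable 2-D blur: filter rows, then columns
    k = gaussian_kernel(sigma)
    rows = [blur_1d(r, k) for r in img]
    cols = [blur_1d(list(c), k) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def psnr(ref, dist, peak=255.0):
    # classical PSNR in dB over an 8-bit range
    n = len(ref) * len(ref[0])
    mse = sum((a - b) ** 2
              for ra, rd in zip(ref, dist)
              for a, b in zip(ra, rd)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# toy image: a strong vertical step edge, the kind of structure BELE targets
img = [[0.0] * 4 + [255.0] * 4 for _ in range(8)]
print(psnr(img, gaussian_blur(img, 1.0)))
print(psnr(img, gaussian_blur(img, 2.0)))
```

Increasing sigma spreads the edge further and lowers the PSNR monotonically; the paper's contribution is to map such degradations back to an equivalent Gaussian blur on strong edges (via the focalization/linearization term) rather than score them with a plain pixel-domain PSNR, which ignores viewing distance.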
Related papers
- Examining the Impact of Optical Aberrations to Image Classification and Object Detection Models [58.98742597810023]
Vision models must remain robust under disturbances such as noise or blur.
This paper studies two datasets of blur corruptions, which we denote OpticsBench and LensCorruptions.
Evaluations for image classification and object detection on ImageNet and MSCOCO show that for a variety of different pre-trained models, the performance on OpticsBench and LensCorruptions varies significantly.
arXiv Detail & Related papers (2025-04-25T17:23:47Z) - A Meaningful Perturbation Metric for Evaluating Explainability Methods [55.09730499143998]
We introduce a novel approach, which harnesses image generation models to perform targeted perturbation.
Specifically, we focus on inpainting only the high-relevance pixels of an input image to modify the model's predictions while preserving image fidelity.
This is in contrast to existing approaches, which often produce out-of-distribution modifications, leading to unreliable results.
arXiv Detail & Related papers (2025-04-09T11:46:41Z) - PIGUIQA: A Physical Imaging Guided Perceptual Framework for Underwater Image Quality Assessment [59.9103803198087]
We propose a Physical Imaging Guided perceptual framework for Underwater Image Quality Assessment (UIQA).
By leveraging underwater radiative transfer theory, we integrate physics-based imaging estimations to establish quantitative metrics for these distortions.
The proposed model accurately predicts image quality scores and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-12-20T03:31:45Z) - DiffSim: Taming Diffusion Models for Evaluating Visual Similarity [19.989551230170584]
This paper introduces the DiffSim method to measure visual similarity in generative models. By aligning features in the attention layers of the denoising U-Net, DiffSim evaluates both appearance and style similarity. We also introduce the Sref and IP benchmarks to evaluate visual similarity at the level of style and instance.
arXiv Detail & Related papers (2024-12-19T07:00:03Z) - Benchmark Generation Framework with Customizable Distortions for Image Classifier Robustness [4.339574774938128]
We present a novel framework for generating adversarial benchmarks to evaluate the robustness of image classification models.
Our framework allows users to customize the types of distortions to be optimally applied to images, which helps address the specific distortions relevant to their deployment.
arXiv Detail & Related papers (2023-10-28T07:40:42Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Doubly Reparameterized Importance Weighted Structure Learning for Scene Graph Generation [40.46394569128303]
Scene graph generation, given an input image, aims to explicitly model objects and their relationships by constructing a visually-grounded scene graph.
We propose a novel doubly reparameterized importance weighted structure learning method, which employs a tighter importance weighted lower bound as the variational inference objective.
The proposed method achieves the state-of-the-art performance on various popular scene graph generation benchmarks.
arXiv Detail & Related papers (2022-06-22T20:00:25Z) - Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate the binary information of "existence of noise" as a treatment variable in image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on an important insight: the rectified results of distorted images of the same scene taken through different lenses should be the same.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
arXiv Detail & Related papers (2020-11-30T08:23:25Z) - Feature-metric Loss for Self-supervised Learning of Depth and Egomotion [13.995413542601472]
Photometric loss is widely used for self-supervised depth and egomotion estimation.
In this work, feature-metric loss is proposed and defined on feature representation.
Comprehensive experiments and detailed analysis via visualization demonstrate the effectiveness of the proposed feature-metric loss.
arXiv Detail & Related papers (2020-07-21T05:19:07Z) - Deep No-reference Tone Mapped Image Quality Assessment [0.0]
Tone mapping introduces distortions in the final image which may lead to visual displeasure.
We introduce a novel no-reference quality assessment technique for these tone mapped images.
We show that the proposed technique delivers competitive performance relative to the state-of-the-art techniques.
arXiv Detail & Related papers (2020-02-08T13:41:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.