R-LPIPS: An Adversarially Robust Perceptual Similarity Metric
- URL: http://arxiv.org/abs/2307.15157v2
- Date: Mon, 31 Jul 2023 16:06:47 GMT
- Title: R-LPIPS: An Adversarially Robust Perceptual Similarity Metric
- Authors: Sara Ghazanfari, Siddharth Garg, Prashanth Krishnamurthy, Farshad
Khorrami, Alexandre Araujo
- Abstract summary: We propose the Robust Learned Perceptual Image Patch Similarity (R-LPIPS) metric.
R-LPIPS is a new metric that leverages adversarially trained deep features.
We demonstrate the superiority of R-LPIPS compared to the classical LPIPS metric.
- Score: 71.33812578529006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Similarity metrics have played a significant role in computer vision to
capture the underlying semantics of images. In recent years, advanced
similarity metrics, such as the Learned Perceptual Image Patch Similarity
(LPIPS), have emerged. These metrics leverage deep features extracted from
trained neural networks and have demonstrated a remarkable ability to closely
align with human perception when evaluating relative image similarity. However,
it is now well-known that neural networks are susceptible to adversarial
examples, i.e., small perturbations invisible to humans crafted to deliberately
mislead the model. Consequently, the LPIPS metric is also sensitive to such
adversarial examples. This susceptibility introduces significant security
concerns, especially considering the widespread adoption of LPIPS in
large-scale applications. In this paper, we propose the Robust Learned
Perceptual Image Patch Similarity (R-LPIPS) metric, a new metric that leverages
adversarially trained deep features. Through a comprehensive set of
experiments, we demonstrate the superiority of R-LPIPS compared to the
classical LPIPS metric. The code is available at
https://github.com/SaraGhazanfari/R-LPIPS.
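For context, the sketch below illustrates the kind of sensitivity the paper targets: a few gradient steps can inflate the classical LPIPS score between two visually indistinguishable images. This is a minimal illustration, not code from the R-LPIPS repository; it assumes the publicly available `lpips` PyPI package and PyTorch, image tensors scaled to [-1, 1], and an arbitrary perturbation budget and step size.
```python
# Minimal sketch (not code from the R-LPIPS repository): a few gradient steps
# that inflate the classical LPIPS distance between two visually identical images.
# Assumes the `lpips` PyPI package and PyTorch; image tensors lie in [-1, 1].
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex').eval()   # classical LPIPS with AlexNet features

img = torch.rand(1, 3, 64, 64) * 2 - 1     # stand-in reference image
delta = torch.zeros_like(img, requires_grad=True)

eps, step = 0.03, 0.01                     # illustrative L_inf budget and step size
for _ in range(10):                        # PGD-style ascent on the LPIPS score
    dist = loss_fn(img, (img + delta).clamp(-1, 1))
    dist.backward()
    with torch.no_grad():
        delta += step * delta.grad.sign()  # move in the direction that raises the score
        delta.clamp_(-eps, eps)            # stay within the perturbation budget
        delta.grad.zero_()

with torch.no_grad():
    print('LPIPS(img, img)        :', loss_fn(img, img).item())
    print('LPIPS(img, img + delta):', loss_fn(img, (img + delta).clamp(-1, 1)).item())
```
Replacing the underlying features with adversarially trained ones, as R-LPIPS does, is intended to keep perturbations of this kind from moving the score far.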
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting DNN inherent sparsity.
LIPworks significantly outperform deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z)
- CSIM: A Copula-based similarity index sensitive to local changes for Image quality assessment [2.3874115898130865]
Image similarity metrics play an important role in image processing, computer vision, and machine learning applications.
Existing metrics, such as PSNR, MSE, SSIM, ISSM, and FSIM, often face limitations in terms of speed, complexity, or sensitivity to small changes in images.
This paper investigates a novel image similarity metric, CSIM, that combines real-time computation with sensitivity to subtle image variations.
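As a point of reference, here is a minimal sketch of how two of the classical metrics listed above respond to a small localized edit (illustrative only and unrelated to the CSIM implementation; assumes NumPy and scikit-image):
```python
# Minimal sketch (not from the CSIM paper): how PSNR and SSIM, two of the classical
# metrics listed above, respond to a small localized change in an image.
# Assumes NumPy and scikit-image; the image and the edit are illustrative stand-ins.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
img = rng.random((128, 128))                 # stand-in grayscale image in [0, 1]
edited = img.copy()
edited[60:68, 60:68] = np.clip(edited[60:68, 60:68] + 0.2, 0.0, 1.0)  # small local edit

print('PSNR:', peak_signal_noise_ratio(img, edited, data_range=1.0))
print('SSIM:', structural_similarity(img, edited, data_range=1.0))
```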
arXiv Detail & Related papers (2024-10-02T10:46:05Z)
- LipSim: A Provably Robust Perceptual Similarity Metric [56.03417732498859]
We show that state-of-the-art perceptual similarity metrics based on an ensemble of ViT-based feature extractors are vulnerable to adversarial attacks.
We then propose a framework to train a robust perceptual similarity metric called LipSim with provable guarantees.
LipSim provides guarded areas around each data point and certificates for all perturbations within an $\ell$ ball.
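For intuition, the generic Lipschitz-certification argument such frameworks build on (a sketch in illustrative notation, not the paper's exact statement): if the metric $D(\cdot, y)$ is $L$-Lipschitz in its first argument with respect to a norm $\|\cdot\|$, then for any perturbation $\delta$ with $\|\delta\| \le \epsilon$,
$|D(x+\delta, y) - D(x, y)| \le L\,\|\delta\| \le L\,\epsilon$,
so the score of a perturbed input is certifiably within $L\epsilon$ of the clean score.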
arXiv Detail & Related papers (2023-10-27T16:59:51Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Compact multi-scale periocular recognition using SAFE features [63.48764893706088]
We present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor.
We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate around this unique point of the eye.
arXiv Detail & Related papers (2022-10-18T11:46:38Z)
- Shift-tolerant Perceptual Similarity Metric [5.326626090397465]
Existing perceptual similarity metrics assume an image and its reference are well aligned.
This paper studies the effect of small misalignment, specifically a small shift between the input and reference image, on existing metrics.
We develop a new deep neural network-based perceptual similarity metric.
arXiv Detail & Related papers (2022-07-27T17:55:04Z)
- Identifying and Mitigating Flaws of Deep Perceptual Similarity Metrics [1.484528358552186]
This work investigates the benefits and flaws of the Deep Perceptual Similarity (DPS) metric.
The metrics are analyzed in depth to understand their strengths and weaknesses.
This work contributes new insights into the flaws of DPS and suggests improvements to the metrics.
arXiv Detail & Related papers (2022-07-06T08:28:39Z)
- Introspective Deep Metric Learning for Image Retrieval [80.29866561553483]
We argue that a good similarity model should consider the semantic discrepancies with caution to better deal with ambiguous images for more robust training.
We propose to represent an image using not only a semantic embedding but also an accompanying uncertainty embedding, which describe the semantic characteristics and the ambiguity of an image, respectively.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling and attains state-of-the-art results on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets.
arXiv Detail & Related papers (2022-05-09T17:51:44Z)
- Towards Imperceptible Query-limited Adversarial Attacks with Perceptual Feature Fidelity Loss [3.351714665243138]
In this work, we propose a novel perceptual metric utilizing the well-established connection between the low-level image feature fidelity and human visual sensitivity.
We show that our metric can robustly reflect and describe the imperceptibility of the generated adversarial images, as validated under various conditions.
arXiv Detail & Related papers (2021-01-31T13:32:55Z)