The 2021 Image Similarity Dataset and Challenge
- URL: http://arxiv.org/abs/2106.09672v1
- Date: Thu, 17 Jun 2021 17:23:59 GMT
- Title: The 2021 Image Similarity Dataset and Challenge
- Authors: Matthijs Douze and Giorgos Tolias and Ed Pizzi and Zoë Papakipos and
Lowik Chanussot and Filip Radenovic and Tomas Jenicek and Maxim Maximov and
Laura Leal-Taixé and Ismail Elezi and Ondřej Chum and Cristian Canton
Ferrer
- Abstract summary: This paper introduces a new benchmark for large-scale image similarity detection.
The goal is to determine whether a query image is a modified copy of any image in a reference corpus of size 1 million.
- Score: 32.202821997745716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a new benchmark for large-scale image similarity
detection. This benchmark is used for the Image Similarity Challenge at
NeurIPS'21 (ISC2021). The goal is to determine whether a query image is a
modified copy of any image in a reference corpus of size 1 million. The
benchmark features a variety of image transformations such as automated
transformations, hand-crafted image edits and machine-learning based
manipulations. This mimics real-life cases appearing in social media, for
example for integrity-related problems dealing with misinformation and
objectionable content. The strength of the image manipulations, and therefore
the difficulty of the benchmark, is calibrated according to the performance of
a set of baseline approaches. Both the query and reference set contain a
majority of "distractor" images that do not match, which corresponds to a
real-life needle-in-haystack setting, and the evaluation metric reflects that.
We expect the DISC21 benchmark to promote image copy detection as an important
and challenging computer vision task and refresh the state of the art.
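The needle-in-haystack setting described in the abstract can be sketched as thresholded nearest-neighbor search over descriptor vectors: a query only counts as a copy if its best match in the reference corpus clears a similarity threshold, so distractor queries return no match. This is a minimal illustration with random stand-in descriptors and a small corpus, not the ISC2021 baselines or evaluation code.

```python
# Minimal sketch of copy detection as thresholded nearest-neighbor search.
# Descriptors are random stand-ins for real image embeddings; the 1,000-vector
# corpus stands in for the 1 million reference images.
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Reference corpus: 1000 unit-norm descriptors of dimension 128.
refs = normalize(rng.normal(size=(1000, 128)))

def detect_copy(query, refs, threshold=0.8):
    """Return (best_index, score) if the query matches some reference above
    the similarity threshold, else (None, score). With mostly-distractor
    queries, most calls should return None."""
    sims = refs @ query              # cosine similarity (unit-norm vectors)
    best = int(np.argmax(sims))
    score = float(sims[best])
    return (best, score) if score >= threshold else (None, score)

# An "edited copy": reference #42 plus slight noise, renormalized.
copy_q = normalize(refs[42] + 0.02 * rng.normal(size=128))
# A distractor: an unrelated random descriptor.
distractor_q = normalize(rng.normal(size=128))

print(detect_copy(copy_q, refs))        # matches index 42 with a high score
print(detect_copy(distractor_q, refs))  # (None, low score)
```

The threshold is what makes the metric needle-in-haystack aware: without it, every distractor query would still "match" its accidental nearest neighbor.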
Related papers
- GlobalMamba: Global Image Serialization for Vision Mamba [73.50475621164037]
Vision mambas have demonstrated strong performance with linear complexity to the number of vision tokens.
Most existing methods employ patch-based image tokenization and then flatten them into 1D sequences for causal processing.
We propose a global image serialization method to transform the image into a sequence of causal tokens.
arXiv Detail & Related papers (2024-10-14T09:19:05Z)
- CSIM: A Copula-based similarity index sensitive to local changes for Image quality assessment [2.3874115898130865]
Image similarity metrics play an important role across image processing, computer vision, and machine learning applications.
Existing metrics, such as PSNR, MSE, SSIM, ISSM and FSIM, often face limitations in terms of either speed, complexity or sensitivity to small changes in images.
This paper investigates CSIM, a novel image similarity metric that combines real-time performance with sensitivity to subtle image variations.
arXiv Detail & Related papers (2024-10-02T10:46:05Z)
- Interpretable Measures of Conceptual Similarity by Complexity-Constrained Descriptive Auto-Encoding [112.0878081944858]
Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning.
We seek to define and compute a notion of "conceptual similarity" among images that captures high-level relations.
Two highly dissimilar images can be discriminated early in their description, whereas conceptually similar ones need more detail to be distinguished.
arXiv Detail & Related papers (2024-02-14T03:31:17Z)
- Active Image Indexing [26.33727468288776]
This paper improves the robustness of image copy detection with active indexing.
We reduce the quantization loss of a given image representation by making imperceptible changes to the image before its release.
Experiments show that the retrieval and copy detection of activated images is significantly improved.
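The quantization-loss idea in this entry can be illustrated at the descriptor level: if the index assigns a representation to a codeword, nudging the representation toward that codeword under a small perturbation budget (standing in for an imperceptible image change) shrinks the quantization error. This is a toy sketch with a hypothetical sign quantizer, not the paper's actual indexing scheme.

```python
# Toy sketch of "activating" a representation for indexing: move x toward its
# assigned codeword q(x) by at most a small L2 budget, reducing the
# quantization loss ||x' - q(x)||. The quantizer here is a stand-in.
import numpy as np

rng = np.random.default_rng(1)
d = 64
x = rng.normal(size=d)                    # descriptor of the image to release

def quantize(v):
    # Toy binary quantizer: keep only the sign pattern, scaled to unit norm.
    return np.sign(v) / np.sqrt(len(v))

codeword = quantize(x)                    # index cell that x is assigned to

def activate(x, codeword, budget=0.1):
    """Move x toward its codeword by at most `budget` in L2 norm
    (stand-in for an imperceptible change applied to the image itself)."""
    direction = codeword - x
    step = min(1.0, budget / np.linalg.norm(direction))
    return x + step * direction

x_active = activate(x, codeword)
loss_before = np.linalg.norm(x - codeword)
loss_after = np.linalg.norm(x_active - codeword)
print(loss_before, "->", loss_after)      # quantization loss strictly drops
```

Because the perturbation is bounded, the released image stays perceptually unchanged while sitting closer to the center of its index cell, which is what makes later retrieval more robust to edits.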
arXiv Detail & Related papers (2022-10-05T17:55:15Z)
- SImProv: Scalable Image Provenance Framework for Robust Content Attribution [80.25476792081403]
We present SImProv, a framework to match a query image back to a trusted database of originals.
SImProv consists of three stages: a scalable search stage for retrieving the top-k most similar images; a re-ranking and near-duplicate detection stage for identifying the original among the candidates; and a manipulation detection stage.
We demonstrate effective retrieval and manipulation detection over a dataset of 100 million images.
arXiv Detail & Related papers (2022-06-28T18:42:36Z)
- Results and findings of the 2021 Image Similarity Challenge [43.79331237080075]
The 2021 Image Similarity Challenge introduced a dataset to serve as a new benchmark to evaluate recent image copy detection methods.
This paper presents a quantitative and qualitative analysis of the top submissions.
arXiv Detail & Related papers (2022-02-08T17:23:32Z)
- Compact Binary Fingerprint for Image Copy Re-Ranking [0.0]
Image copy detection is a challenging and appealing topic in computer vision and signal processing.
Local keypoint descriptors such as SIFT are used to represent the images, and images are matched and retrieved based on descriptor matching.
Features are quantized so that searching/matching remains feasible for large databases, at the cost of some accuracy loss.
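The quantize-then-match pipeline this entry describes can be sketched with toy binary fingerprints: float descriptors are reduced to their sign pattern, so matching becomes cheap Hamming-distance comparison, trading some accuracy for speed. The descriptors below are random stand-ins, not real SIFT features or the paper's fingerprint.

```python
# Sketch of binarized-descriptor matching: 1 bit per dimension, Hamming
# distance instead of float comparison. Stand-in data, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)

def binarize(desc):
    # Compact fingerprint: the sign pattern of the descriptor.
    return (desc > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Database of 500 128-d float descriptors (stand-ins for SIFT-like features).
db = rng.normal(size=(500, 128))
db_bits = np.array([binarize(d) for d in db])

# Query: a noisy version of database entry 7 (a lightly edited copy).
query = db[7] + 0.3 * rng.normal(size=128)
q_bits = binarize(query)

dists = [hamming(q_bits, b) for b in db_bits]
best = int(np.argmin(dists))
print(best, dists[best])   # nearest fingerprint in Hamming space
```

The accuracy loss shows up as occasional sign flips in the query's fingerprint; matching still succeeds as long as the flipped-bit count stays well below the distance to unrelated fingerprints (about half the bits, on average).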
arXiv Detail & Related papers (2021-09-16T08:44:56Z)
- Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation [136.53288628437355]
Controllable semantic image editing enables a user to change entire image attributes with few clicks.
Current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism.
We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work which primarily focuses on qualitative evaluation.
arXiv Detail & Related papers (2021-02-01T21:38:36Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)