Image Similarity using An Ensemble of Context-Sensitive Models
- URL: http://arxiv.org/abs/2401.07951v1
- Date: Mon, 15 Jan 2024 20:23:05 GMT
- Title: Image Similarity using An Ensemble of Context-Sensitive Models
- Authors: Zukang Liao and Min Chen
- Abstract summary: In labelling similarity, assigning a numerical score to a pair of images is less intuitive than determining if an image A is closer to a reference image R than another image B.
We present a novel approach for building an image similarity model based on labelled data in the form of A:R vs B:R.
- Score: 3.4839256836124624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image similarity has been extensively studied in computer vision. In recent
years, machine-learned models have shown their ability to encode more semantics
than traditional multivariate metrics. However, in labelling similarity,
assigning a numerical score to a pair of images is less intuitive than
determining if an image A is closer to a reference image R than another image
B. In this work, we present a novel approach for building an image similarity
model based on labelled data in the form of A:R vs B:R. We address the
challenges of sparse sampling in the image space (R, A, B) and biases in the
models trained with context-based data by using an ensemble model. In
particular, we employed two ML techniques to construct such an ensemble model,
namely dimensionality reduction and MLP regressors. Our testing results show
that the ensemble model constructed performs ~5% better than the best
individual context-sensitive models. It also performs better than the model
trained with mixed imagery data, as well as existing similarity models, e.g.,
CLIP and DINO. This work demonstrates that context-based labelling and model
training can be effective when an appropriate ensemble approach is used to
alleviate the limitation due to sparse sampling.
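The ensemble idea described in the abstract can be sketched roughly as follows. This is a hypothetical, minimal illustration, not the authors' implementation: the random embeddings, the bootstrap-based "contexts", and all names and hyperparameters are assumptions; simple linear least-squares regressors stand in for the paper's MLP regressors, and an SVD projection stands in for its dimensionality-reduction step.

```python
# Hypothetical sketch: several context-sensitive regressors each score how
# similar an image is to a reference R on dimensionality-reduced embeddings;
# the ensemble averages their scores to answer the comparative question
# "is A closer to R than B?" (the A:R vs B:R labelling form).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pair embeddings" (e.g. features of (R, A) pairs from a backbone
# such as CLIP or DINO) with scalar similarity labels in [0, 1].
n, d, k = 200, 64, 16
X = rng.normal(size=(n, d))
y = rng.uniform(size=n)

# Dimensionality reduction: project onto the top-k right singular vectors.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
project = lambda x: (x - mean) @ Vt[:k].T

# One regressor per "context": bootstrap resamples act here as a proxy for
# the paper's context-specific training sets.
weights = []
for _ in range(3):
    idx = rng.choice(n, size=n, replace=True)
    Z = project(X[idx])
    w, *_ = np.linalg.lstsq(Z, y[idx], rcond=None)
    weights.append(w)

def ensemble_score(x):
    """Mean similarity score over the context-sensitive regressors."""
    z = project(x.reshape(1, -1))[0]
    return float(np.mean([z @ w for w in weights]))

# Comparative judgement: is image A closer to R than image B?
a_closer = ensemble_score(X[0]) > ensemble_score(X[1])
print("A closer to R than B:", a_closer)
```

Averaging the members' scores is the simplest ensembling choice; the paper's contribution lies in how the context-sensitive members are trained and combined, which this sketch does not reproduce.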
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Semantic Approach to Quantifying the Consistency of Diffusion Model Image Generation [0.40792653193642503]
We identify the need for an interpretable, quantitative score of the repeatability, or consistency, of image generation in diffusion models.
We propose a semantic approach, using a pairwise mean CLIP score as our semantic consistency score.
arXiv Detail & Related papers (2024-04-12T20:16:03Z) - Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z) - IMACS: Image Model Attribution Comparison Summaries [16.80986701058596]
We introduce IMACS, a method that combines gradient-based model attributions with aggregation and visualization techniques.
IMACS extracts salient input features from an evaluation dataset, clusters them based on similarity, then visualizes differences in model attributions for similar input features.
We show how our technique can uncover behavioral differences caused by domain shift between two models trained on satellite images.
arXiv Detail & Related papers (2022-01-26T21:35:14Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - NP-DRAW: A Non-Parametric Structured Latent Variable Model for Image Generation [139.8037697822064]
We present a non-parametric structured latent variable model for image generation, called NP-DRAW.
It sequentially draws on a latent canvas in a part-by-part fashion and then decodes the image from the canvas.
arXiv Detail & Related papers (2021-06-25T05:17:55Z) - Evaluating Contrastive Models for Instance-based Image Retrieval [6.393147386784114]
We evaluate contrastive models for the task of image retrieval.
We find that models trained using contrastive methods perform on par with (and can outperform) a pre-trained baseline trained on the ImageNet labels.
arXiv Detail & Related papers (2021-04-30T12:05:23Z) - An application of a pseudo-parabolic modeling to texture image recognition [0.0]
We present a novel methodology for texture image recognition based on partial differential equation modeling.
We employ the pseudo-parabolic Buckley-Leverett equation to provide a dynamics to the digital image representation and collect local descriptors from those images evolving in time.
arXiv Detail & Related papers (2021-02-09T18:08:42Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.