Unsupervised Object Localization with Representer Point Selection
- URL: http://arxiv.org/abs/2309.04172v1
- Date: Fri, 8 Sep 2023 07:38:52 GMT
- Title: Unsupervised Object Localization with Representer Point Selection
- Authors: Yeonghwan Song, Seokwoo Jang, Dina Katabi, Jeany Son
- Abstract summary: We propose a novel unsupervised object localization method that explains the model's predictions without additional finetuning.
Our method outperforms state-of-the-art unsupervised and self-supervised object localization methods on various datasets by significant margins.
- Score: 19.794650465591683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel unsupervised object localization method that allows us to
explain the predictions of the model by utilizing self-supervised pre-trained
models without additional finetuning. Existing unsupervised and self-supervised
object localization methods often utilize class-agnostic activation maps or
self-similarity maps of a pre-trained model. Although these maps can offer
valuable information for localization, their limited ability to explain how the
model makes predictions remains challenging. In this paper, we propose a simple
yet effective unsupervised object localization method based on representer
point selection, where the predictions of the model can be represented as a
linear combination of representer values of training points. By selecting
representer points, which are the most important examples for the model
predictions, our model can provide insights into how the model predicts the
foreground object by providing relevant examples as well as their importance.
Our method outperforms state-of-the-art unsupervised and self-supervised
object localization methods on various datasets by significant margins and
even surpasses recent weakly supervised and few-shot methods.
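The core idea, expressing a prediction as a linear combination of representer values of training points, can be sketched as follows. This is a minimal toy illustration of the representer decomposition, not the paper's actual implementation; all names, shapes, and values here are hypothetical:

```python
import numpy as np

def representer_score(test_feat, train_feats, representer_values):
    """Decompose a prediction into per-training-point contributions:
    f(x) = sum_i alpha_i * <feat_i, feat_x>  (representer theorem form).
    Names and shapes are illustrative, not the paper's actual API."""
    sims = train_feats @ test_feat               # similarity to each training point
    contributions = representer_values * sims    # per-point contribution
    return contributions.sum(), contributions

# toy example: 4 "training" features in a 3-dimensional embedding space
rng = np.random.default_rng(0)
train_feats = rng.standard_normal((4, 3))
alphas = np.array([0.5, -0.2, 0.1, 0.3])         # hypothetical representer values

score, contrib = representer_score(train_feats[0], train_feats, alphas)
# representer points are the training examples with the largest |contribution|,
# i.e. the most influential examples for this particular prediction
representer_points = np.argsort(-np.abs(contrib))
```

Because the score is an exact sum of per-point terms, ranking training points by the magnitude of their contribution directly yields the "most important examples" the abstract refers to.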
Related papers
- Towards Unsupervised Model Selection for Domain Adaptive Object Detection [27.551367463011008]
We propose an unsupervised model selection approach for domain adaptive object detection.
Our approach is based on the flat minima principle, i.e., models located in the flat minima region in the parameter space usually exhibit excellent generalization ability.
We show via a generalization bound that flatness can be viewed as model variance, while the minima depend on the domain distribution distance for the detection task.
arXiv Detail & Related papers (2024-12-23T05:06:26Z)
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z)
- Representer Point Selection for Explaining Regularized High-dimensional Models [105.75758452952357]
We introduce a class of sample-based explanations we term high-dimensional representers.
Our workhorse is a novel representer theorem for general regularized high-dimensional models.
We study the empirical performance of our proposed methods on three real-world binary classification datasets and two recommender system datasets.
arXiv Detail & Related papers (2023-05-31T16:23:58Z)
- Evaluating Representations with Readout Model Switching [19.907607374144167]
In this paper, we propose to use the Minimum Description Length (MDL) principle to devise an evaluation metric.
We design a hybrid discrete and continuous-valued model space for the readout models and employ a switching strategy to combine their predictions.
The proposed metric can be efficiently computed with an online method and we present results for pre-trained vision encoders of various architectures.
arXiv Detail & Related papers (2023-02-19T14:08:01Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Distributional Depth-Based Estimation of Object Articulation Models [21.046351215949525]
We propose a method that efficiently learns distributions over articulation model parameters directly from depth images.
Our core contributions include a novel representation for distributions over rigid body transformations.
We introduce a novel deep learning based approach, DUST-net, that performs category-independent articulation model estimation.
arXiv Detail & Related papers (2021-08-12T17:44:51Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
- Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation [0.0]
We present a novel method for reliably explaining the predictions of neural networks.
Our method is built on the assumption of a smooth landscape in the loss function of the model prediction.
arXiv Detail & Related papers (2021-03-26T08:52:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.