Promising or Elusive? Unsupervised Object Segmentation from Real-world Single Images
- URL: http://arxiv.org/abs/2210.02324v1
- Date: Wed, 5 Oct 2022 15:22:54 GMT
- Title: Promising or Elusive? Unsupervised Object Segmentation from Real-world Single Images
- Authors: Yafei Yang, Bo Yang
- Abstract summary: We investigate the effectiveness of existing unsupervised models on challenging real-world images.
We find that, not surprisingly, existing unsupervised models fail to segment generic objects in real-world images.
Our research results suggest that future work should exploit more explicit objectness biases in the network design.
- Score: 4.709764624933227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of unsupervised object segmentation from
single images. We do not introduce a new algorithm, but systematically
investigate the effectiveness of existing unsupervised models on challenging
real-world images. We first introduce four complexity factors to
quantitatively measure the distributions of object- and scene-level biases in
appearance and geometry for datasets with human annotations. With the aid of
these factors, we empirically find that, not surprisingly, existing
unsupervised models catastrophically fail to segment generic objects in
real-world images, even though they easily achieve excellent performance on
numerous simple synthetic datasets; the failure stems from the vast gap in
objectness biases between synthetic and real images. By conducting extensive experiments on
multiple groups of ablated real-world datasets, we ultimately find that the key
factors underlying the colossal failure of existing unsupervised models on
real-world images are the challenging distributions of object- and scene-level
biases in appearance and geometry. Because of this, the inductive biases
introduced in existing unsupervised models can hardly capture the diverse
object distributions. Our research results suggest that future work should
exploit more explicit objectness biases in the network design.
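The paper defines its four complexity factors concretely; as a flavor of how such a factor can be computed from human annotations, below is a minimal Python sketch of one plausible appearance-level measure. The function name and formula are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical object-level appearance complexity: the mean color-gradient
# magnitude inside each annotated object mask. An illustrative stand-in for
# the paper's complexity factors, not their exact definition.
import numpy as np

def object_gradient_complexity(image, masks):
    """image: HxWx3 float array in [0, 1]; masks: iterable of HxW bool arrays."""
    gray = image.mean(axis=2)              # collapse color channels
    gy, gx = np.gradient(gray)             # finite-difference gradients
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)  # per-pixel gradient magnitude
    per_object = [grad_mag[m].mean() for m in masks if m.any()]
    return float(np.mean(per_object)) if per_object else 0.0
```

Averaged over a dataset, such a score would be low for flat-colored synthetic objects and high for textured real-world ones, which is exactly the kind of distributional gap the abstract describes.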
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- PEEKABOO: Hiding parts of an image for unsupervised object localization [7.161489957025654]
Localizing objects in an unsupervised manner poses significant challenges due to the absence of key visual information.
We propose a single-stage learning framework, dubbed PEEKABOO, for unsupervised object localization.
The key idea is to selectively hide parts of an image and leverage the remaining image information to infer the location of objects without explicit supervision.
arXiv Detail & Related papers (2024-07-24T20:35:20Z)
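The hide-and-infer idea summarized above can be illustrated with a toy patch-masking step. This is a sketch of the general mechanism under assumed choices (patch size, hiding ratio), not the PEEKABOO implementation:

```python
# Toy illustration of hiding parts of an image: zero out random square
# patches so a model must infer object locations from what remains.
# Patch size and hide ratio are arbitrary choices, not PEEKABOO's.
import numpy as np

def hide_patches(image, patch=16, hide_ratio=0.5, rng=None):
    """image: HxWxC array; returns a copy with ~hide_ratio of patches zeroed."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if rng.random() < hide_ratio:
                out[y:y + patch, x:x + patch] = 0
    return out
```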
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
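One simple way to use dimensionality reduction to measure overlap between real and synthetic image sets, as the entry above describes, is to project both into a shared low-dimensional space and compare nearest-neighbor distances. PCA and the overlap score below are assumptions for illustration, not necessarily the paper's method:

```python
# Illustrative overlap measure: fit PCA on the pooled samples, then score
# how close each real sample is to its nearest synthetic neighbor.
import numpy as np
from sklearn.decomposition import PCA

def pca_overlap(real_feats, synth_feats, n_components=2):
    """real_feats, synth_feats: (N, D) arrays of flattened images or features."""
    pca = PCA(n_components=n_components).fit(np.vstack([real_feats, synth_feats]))
    r, s = pca.transform(real_feats), pca.transform(synth_feats)
    dists = np.linalg.norm(r[:, None, :] - s[None, :, :], axis=-1)
    # Lower mean nearest-neighbor distance implies greater overlap.
    return float(dists.min(axis=1).mean())
```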
- Benchmarking and Analysis of Unsupervised Object Segmentation from Real-world Single Images [6.848868644753519]
We investigate the effectiveness of existing unsupervised models on challenging real-world images.
We find that existing unsupervised models fail to segment generic objects in real-world images.
Our research results suggest that future work should exploit more explicit objectness biases in the network design.
arXiv Detail & Related papers (2023-12-08T10:25:59Z)
- Bridging the Gap to Real-World Object-Centric Learning [66.55867830853803]
We show that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way.
Our approach, DINOSAUR, significantly outperforms existing object-centric learning models on simulated data.
arXiv Detail & Related papers (2022-09-29T15:24:47Z)
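The training signal described above — reconstructing the features of a frozen self-supervised encoder instead of raw pixels — reduces to a simple loss. The module names here are hypothetical stand-ins, not the DINOSAUR code:

```python
# Sketch of feature reconstruction as a training signal: the object-centric
# model must reproduce patch features from a frozen self-supervised encoder
# (e.g., a DINO ViT). `grouping_model` is a hypothetical slot-based
# encoder-decoder, not the actual DINOSAUR architecture.
import torch
import torch.nn.functional as F

def feature_reconstruction_loss(grouping_model, frozen_encoder, images):
    with torch.no_grad():
        target = frozen_encoder(images)  # (B, N, D) patch features, no gradients
    recon = grouping_model(images)       # decoded back to the same shape
    return F.mse_loss(recon, target)
```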
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- Generalization and Robustness Implications in Object-Centric Learning [23.021791024676986]
In this paper, we train state-of-the-art unsupervised models on five common multi-object datasets.
From our experimental study, we find object-centric representations to be generally useful for downstream tasks.
arXiv Detail & Related papers (2021-07-01T17:51:11Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Contemplating real-world object classification [53.10151901863263]
We reanalyze the ObjectNet dataset recently proposed by Barbu et al., which contains objects in daily-life situations.
We find that applying deep models to the isolated objects, rather than to the entire scene as in the original paper, results in a 20-30% performance improvement.
arXiv Detail & Related papers (2021-03-08T23:29:59Z)
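The 20-30% gain reported above comes from changing the evaluation protocol: classify a crop around each annotated object instead of the full cluttered scene. A minimal sketch of that change, with a hypothetical `model` and an assumed (x0, y0, x1, y1) box format:

```python
# Toy version of "classify the isolated object": crop each annotated
# bounding box and resize it before running the classifier, instead of
# feeding the whole scene. Box format (x0, y0, x1, y1) is an assumption.
import torch
import torch.nn.functional as F

def classify_isolated(model, image, boxes, size=224):
    """image: CxHxW float tensor; boxes: list of (x0, y0, x1, y1) pixel coords."""
    preds = []
    for x0, y0, x1, y1 in boxes:
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)  # 1xCxhxw
        crop = F.interpolate(crop, size=(size, size),
                             mode="bilinear", align_corners=False)
        preds.append(model(crop).argmax(dim=1).item())
    return preds
```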