Non-parametric spatially constrained local prior for scene parsing on
real-world data
- URL: http://arxiv.org/abs/2006.12874v1
- Date: Tue, 23 Jun 2020 10:12:08 GMT
- Title: Non-parametric spatially constrained local prior for scene parsing on
real-world data
- Authors: Ligang Zhang
- Abstract summary: We present the non-parametric Spatially Constrained Local Prior (SCLP) for scene parsing on realistic data.
The SCLP is powerful in capturing both long- and short-range context about inter-object correlations in the query image.
Our experiments on the SIFT Flow and PASCAL-Context benchmark datasets show that the SCLP, used in conjunction with superpixel-level visual features, achieves performance among the top of state-of-the-art approaches.
- Score: 4.035753155957698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene parsing aims to recognize the object category of every pixel in scene
images, and it plays a central role in image content understanding and computer
vision applications. However, accurate scene parsing from unconstrained
real-world data is still a challenging task. In this paper, we present the
non-parametric Spatially Constrained Local Prior (SCLP) for scene parsing on
realistic data. For a given query image, the non-parametric SCLP is learnt by
first retrieving a subset of most similar training images to the query image
and then collecting prior information about object co-occurrence statistics
between spatial image blocks and between adjacent superpixels from the
retrieved subset. The SCLP is powerful in capturing both long- and short-range
context about inter-object correlations in the query image and can be
effectively integrated with traditional visual features to refine the
classification results. Our experiments on the SIFT Flow and PASCAL-Context
benchmark datasets show that the non-parametric SCLP, used in conjunction with
superpixel-level visual features, achieves performance among the top of
state-of-the-art approaches.
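The retrieval-then-prior pipeline described in the abstract (retrieve similar training images, collect object co-occurrence statistics from them, then use those statistics to refine per-superpixel classification) can be sketched roughly as follows. This is an illustrative toy sketch only: the function names, the dot-product similarity, and the linear combination weight are assumptions, not the paper's actual formulation.

```python
from collections import Counter

def retrieve_similar(query_feat, train_feats, k=3):
    """Rank training images by a toy similarity (dot product) and keep the top-k."""
    scores = [(sum(q * t for q, t in zip(query_feat, f)), i)
              for i, f in enumerate(train_feats)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

def cooccurrence_prior(retrieved_adjacencies):
    """Count how often pairs of object labels occur on adjacent superpixels
    in the retrieved images; normalise counts into a prior over label pairs."""
    counts = Counter()
    for adjacency in retrieved_adjacencies:  # each is a list of (label_a, label_b) edges
        for a, b in adjacency:
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    total = sum(counts.values()) or 1
    return {pair: c / total for pair, c in counts.items()}

def refine(unary_scores, neighbour_labels, prior, weight=0.5):
    """Blend per-superpixel classifier scores with the co-occurrence prior
    from neighbouring superpixels, and return the winning label."""
    refined = {}
    for label, score in unary_scores.items():
        context = sum(prior.get((label, nb), 0.0) for nb in neighbour_labels)
        refined[label] = (1 - weight) * score + weight * context
    return max(refined, key=refined.get)
```

In this sketch the prior only captures the short-range (adjacent-superpixel) context; the long-range spatial-block statistics mentioned in the abstract would be collected analogously over pairs of image blocks rather than adjacent superpixels.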
Related papers
- Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models [21.17975741743583]
It has recently been discovered that using a pre-trained vision-language model (VLM), e.g., CLIP, to align a whole query image with several finer text descriptions can significantly enhance zero-shot performance.
In this paper, we empirically find that the finer descriptions tend to align more effectively with local areas of the query image rather than the whole image.
arXiv Detail & Related papers (2024-06-05T04:08:41Z)
- Learning-based Relational Object Matching Across Views [63.63338392484501]
We propose a learning-based approach which combines local keypoints with novel object-level features for matching object detections between RGB images.
We train our object-level matching features based on appearance and inter-frame and cross-frame spatial relations between objects in an associative graph neural network.
arXiv Detail & Related papers (2023-05-03T19:36:51Z)
- Location-Aware Self-Supervised Transformers [74.76585889813207]
We propose to pretrain networks for semantic segmentation by predicting the relative location of image parts.
We control the difficulty of the task by masking a subset of the reference patch features visible to those of the query.
Our experiments show that this location-aware pretraining leads to representations that transfer competitively to several challenging semantic segmentation benchmarks.
arXiv Detail & Related papers (2022-12-05T16:24:29Z)
- Investigating the Role of Image Retrieval for Visual Localization -- An exhaustive benchmark [46.166955777187816]
This paper focuses on understanding the role of image retrieval for multiple visual localization paradigms.
We introduce a novel benchmark setup and compare state-of-the-art retrieval representations on multiple datasets.
Using these tools and in-depth analysis, we show that retrieval performance on classical landmark retrieval or place recognition tasks correlates with localization performance only for some, but not all, paradigms.
arXiv Detail & Related papers (2022-05-31T12:59:01Z)
- CRIS: CLIP-Driven Referring Image Segmentation [71.56466057776086]
We propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS).
CRIS resorts to vision-language decoding and contrastive learning to achieve text-to-pixel alignment.
Our proposed framework significantly outperforms state-of-the-art methods without any post-processing.
arXiv Detail & Related papers (2021-11-30T07:29:08Z)
- SGMNet: Scene Graph Matching Network for Few-Shot Remote Sensing Scene Classification [14.016637774748677]
Few-Shot Remote Sensing Scene Classification (FSRSSC) is an important task, which aims to recognize novel scene classes with few examples.
We propose a novel scene graph matching-based meta-learning framework for FSRSSC, called SGMNet.
We conduct extensive experiments on UCMerced LandUse, WHU19, AID, and NWPU-RESISC45 datasets.
arXiv Detail & Related papers (2021-10-09T07:43:40Z)
- Mining Contextual Information Beyond Image for Semantic Segmentation [37.783233906684444]
The paper studies the context aggregation problem in semantic image segmentation.
It proposes to mine the contextual information beyond individual images to further augment the pixel representations.
The proposed method could be effortlessly incorporated into existing segmentation frameworks.
arXiv Detail & Related papers (2021-08-26T14:34:23Z)
- Exploiting the relationship between visual and textual features in social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks according to the labels of the Places dataset, are performed by first considering only the visual part.
Considering the texts associated with the images can help improve accuracy, depending on the goal.
arXiv Detail & Related papers (2021-07-08T10:54:59Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision [57.031588264841]
We leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps.
A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss.
We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme.
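The dual-encoder alignment summarized above can be sketched as a symmetric contrastive (InfoNCE-style) objective, where matched image/text pairs in a batch are positives and all other combinations are negatives. The sketch below is an illustrative toy implementation on plain Python lists; the function name, temperature value, and toy embeddings are assumptions, not details from the paper.

```python
import math

def info_nce(image_embs, text_embs, temperature=0.1):
    """Symmetric contrastive loss over a batch of paired image/text
    embeddings: the matched pair on the diagonal is the positive,
    every other in-batch combination acts as a negative."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    n = len(image_embs)
    loss = 0.0
    for i in range(n):
        # image -> text direction: classify the matching caption
        logits = [dot(image_embs[i], t) / temperature for t in text_embs]
        loss += -logits[i] + math.log(sum(math.exp(l) for l in logits))
        # text -> image direction: classify the matching image
        logits = [dot(text_embs[i], v) / temperature for v in image_embs]
        loss += -logits[i] + math.log(sum(math.exp(l) for l in logits))
    return loss / (2 * n)
```

With well-aligned pairs the loss approaches zero, while mismatched pairs drive it up, which is what pushes the two encoders toward a shared embedding space even on noisy alt-text supervision.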
arXiv Detail & Related papers (2021-02-11T10:08:12Z)
- Geometrically Mappable Image Features [85.81073893916414]
Vision-based localization of an agent in a map is an important problem in robotics and computer vision.
We propose a method that learns image features targeted for image-retrieval-based localization.
arXiv Detail & Related papers (2020-03-21T15:36:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.