Image Co-skeletonization via Co-segmentation
- URL: http://arxiv.org/abs/2004.05575v1
- Date: Sun, 12 Apr 2020 09:35:54 GMT
- Title: Image Co-skeletonization via Co-segmentation
- Authors: Koteswar Rao Jerripothula, Jianfei Cai, Jiangbo Lu, Junsong Yuan
- Abstract summary: We propose a new joint processing topic: image co-skeletonization.
Object skeletonization in a single natural image is a challenging problem because there is hardly any prior knowledge about the object.
We propose a coupled framework for co-skeletonization and co-segmentation tasks so that they are well informed by each other.
- Score: 102.59781674888657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in the joint processing of images have clearly shown its
advantages over individual processing. Unlike existing works geared
towards co-segmentation or co-localization, in this paper, we explore a new
joint processing topic: image co-skeletonization, which is defined as joint
skeleton extraction of objects in an image collection. Object skeletonization
in a single natural image is a challenging problem because there is hardly any
prior knowledge about the object. Therefore, we resort to the idea of object
co-skeletonization, hoping that the commonness prior that exists across the
images may help, just as it does for other joint processing problems such as
co-segmentation. We observe that the skeleton can provide good scribbles for
segmentation, and skeletonization, in turn, needs good segmentation. Therefore,
we propose a coupled framework for co-skeletonization and co-segmentation tasks
so that they are well informed by each other, and benefit each other
synergistically. Since it is a new problem, we also construct a benchmark
dataset by annotating nearly 1.8k images spread across 38 categories. Extensive
experiments demonstrate that the proposed method achieves promising results in
all three joint-processing scenarios: weakly supervised, supervised, and
unsupervised.
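The coupling described in the abstract (the skeleton supplies scribbles for segmentation, and a better segmentation yields a better skeleton) can be pictured as a simple alternating loop. The sketch below is illustrative only and is not the authors' method: it assumes OpenCV's GrabCut as a stand-in segmenter, scikit-image's skeletonize for the skeleton step, and a hypothetical initial foreground prior init_mask (e.g., a saliency map); the cross-image commonness prior that makes this co-skeletonization is omitted.
```python
# Minimal single-image sketch of the skeleton <-> segmentation interplay.
# Illustrative only: GrabCut and morphological thinning are stand-ins for the
# paper's co-segmentation and co-skeletonization steps; init_mask is a
# hypothetical foreground prior (e.g., a saliency map).
import cv2
import numpy as np
from skimage.morphology import skeletonize

def alternate_skeleton_segmentation(image, init_mask, n_iters=3):
    """image: HxWx3 uint8 BGR array; init_mask: HxW boolean foreground prior."""
    mask = init_mask.astype(bool)
    for _ in range(n_iters):
        # Step 1: skeletonize the current foreground estimate.
        skeleton = skeletonize(mask)

        # Step 2: re-segment, treating skeleton pixels as definite-foreground
        # scribbles and the current mask as probable foreground.
        gc_mask = np.full(mask.shape, cv2.GC_PR_BGD, dtype=np.uint8)
        gc_mask[mask] = cv2.GC_PR_FGD
        gc_mask[skeleton] = cv2.GC_FGD
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        cv2.grabCut(image, gc_mask, None, bgd_model, fgd_model, 5,
                    cv2.GC_INIT_WITH_MASK)
        mask = np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
    return skeletonize(mask), mask
```
In the paper's joint setting, the segmentation step would additionally be informed by the other images in the collection, so that the commonness prior compensates for the lack of per-image object knowledge.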
Related papers
- Language-free Compositional Action Generation via Decoupling Refinement [67.50452446686725]
We introduce a novel framework to generate compositional actions without reliance on language auxiliaries.
Our approach consists of three main components: Action Coupling, Conditional Action Generation, and Decoupling Refinement.
arXiv Detail & Related papers (2023-07-07T12:00:38Z)
- AIMS: All-Inclusive Multi-Level Segmentation [93.5041381700744]
We propose a new task, All-Inclusive Multi-Level (AIMS), which segments visual regions into three levels: part, entity, and relation.
We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation.
arXiv Detail & Related papers (2023-05-28T16:28:49Z)
- Weakly-Supervised 3D Medical Image Segmentation using Geometric Prior and Contrastive Similarity [19.692257159373373]
We propose a simple yet effective segmentation framework that incorporates the geometric prior and contrastive similarity.
The proposed framework is superior to state-of-the-art weakly-supervised methods on publicly accessible datasets.
arXiv Detail & Related papers (2023-02-04T07:55:30Z)
- Joint reconstruction-segmentation on graphs [0.7829352305480285]
We present a method for joint reconstruction-segmentation using graph-based segmentation methods.
Complications arise due to the large size of the matrices involved, and we show how these complications can be managed.
We apply this scheme to distorted versions of the "two cows" images familiar from previous graph-based segmentation literature.
arXiv Detail & Related papers (2022-08-11T14:01:38Z)
- Comprehensive Saliency Fusion for Object Co-segmentation [3.908842679355254]
Saliency fusion has been one of the promising ways to carry out object co-segmentation.
This paper revisits the problem and proposes fusing saliency maps of both the same image and different images.
It also leverages advances in deep learning for the saliency extraction and correspondence processes.
arXiv Detail & Related papers (2022-01-30T14:22:58Z)
- Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z)
- Skeleton-Aware Networks for Deep Motion Retargeting [83.65593033474384]
We introduce a novel deep learning framework for data-driven motion retargeting between skeletons.
Our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
arXiv Detail & Related papers (2020-05-12T12:51:40Z)
- Peeking into occluded joints: A novel framework for crowd pose estimation [88.56203133287865]
OPEC-Net is an Image-Guided Progressive GCN module that estimates invisible joints from an inference perspective.
OCPose is the most complex Occluded Pose dataset with respect to average IoU between adjacent instances.
arXiv Detail & Related papers (2020-03-23T19:32:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.