An Efficiently Coupled Shape and Appearance Prior for Active Contour
Segmentation
- URL: http://arxiv.org/abs/2103.14887v2
- Date: Wed, 31 Mar 2021 00:45:20 GMT
- Title: An Efficiently Coupled Shape and Appearance Prior for Active Contour
Segmentation
- Authors: Martin Mueller and Navdeep Dahiya and Anthony Yezzi
- Abstract summary: This paper proposes a novel training model based on shape and appearance features for object segmentation in images and videos.
Our appearance-based feature is a one-dimensional function, which is efficiently coupled with the object's shape by integrating intensities along the object's iso-contours.
Joint PCA training on these shape and appearance features further exploits shape-appearance correlations and the resulting training model is incorporated in an active-contour-type energy functional for recognition-segmentation tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel training model based on shape and appearance
features for object segmentation in images and videos. Whereas most such models
rely on two-dimensional appearance templates or a finite set of descriptors,
our appearance-based feature is a one-dimensional function, which is
efficiently coupled with the object's shape by integrating intensities along
the object's iso-contours. Joint PCA training on these shape and appearance
features further exploits shape-appearance correlations and the resulting
training model is incorporated in an active-contour-type energy functional for
recognition-segmentation tasks. Experiments on synthetic and infrared images
demonstrate how this shape and appearance training model improves accuracy
compared to methods based on the Chan-Vese energy.
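For reference, the Chan-Vese baseline mentioned above segments by fitting a piecewise-constant, two-region intensity model; its energy in the standard form (not reproduced from this paper) is:

```latex
E_{CV}(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \nu\,\mathrm{Area}(\mathrm{inside}(C))
  + \lambda_1 \int_{\mathrm{inside}(C)} |I(x) - c_1|^2 \, dx
  + \lambda_2 \int_{\mathrm{outside}(C)} |I(x) - c_2|^2 \, dx
```

Here $I$ is the image, $C$ the evolving contour, and $c_1, c_2$ the mean intensities inside and outside; the proposed model replaces this region-constant appearance assumption with a trained, contour-coupled appearance function.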
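To make the coupled shape-appearance feature concrete, the following is a minimal, hypothetical sketch (not the authors' code): it assumes the shape is represented by a signed distance function, approximates the 1-D appearance feature as the mean intensity in a thin band around each iso-level, and runs joint PCA on the stacked shape/appearance vectors. The exact curve integrals, normalization, and shape representation in the paper may differ.

```python
# Hypothetical sketch of a contour-coupled 1-D appearance feature + joint PCA.
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA

def signed_distance(mask):
    """Signed distance function from a boolean mask: negative inside, positive outside."""
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(~mask)
    return outside - inside

def appearance_curve(image, phi, levels):
    """1-D appearance feature: mean intensity in a thin band around each
    iso-level {phi = t}; a stand-in for an exact curve integral."""
    half_width = 0.5 * (levels[1] - levels[0])
    curve = np.zeros(len(levels))
    for i, t in enumerate(levels):
        band = np.abs(phi - t) < half_width
        curve[i] = image[band].mean() if band.any() else 0.0
    return curve

def joint_feature(image, mask, levels):
    """Concatenate z-scored shape (SDF values) and appearance (iso-contour) features."""
    phi = signed_distance(mask)
    app = appearance_curve(image, phi, levels)
    shape = phi.ravel()
    # Scale both parts so neither dominates the joint PCA.
    shape = (shape - shape.mean()) / (shape.std() + 1e-8)
    app = (app - app.mean()) / (app.std() + 1e-8)
    return np.concatenate([shape, app])

# Joint PCA over a set of aligned training images and masks (illustrative only).
levels = np.linspace(-20.0, 20.0, 41)   # iso-levels of phi at which to sample intensity
# images, masks = ...                   # aligned training pairs of equal size
# X = np.stack([joint_feature(im, m, levels) for im, m in zip(images, masks)])
# pca = PCA(n_components=10).fit(X)     # coupled shape-appearance modes
```

The resulting PCA modes capture correlated variations of shape and appearance, which is what the active-contour energy in the paper exploits during segmentation.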
Related papers
- Detection Based Part-level Articulated Object Reconstruction from Single RGBD Image [52.11275397911693]
We propose an end-to-end trainable, cross-category method for reconstructing multiple man-made articulated objects from a single RGBD image.
We depart from previous works that rely on learning an instance-level latent space, focusing on man-made articulated objects with predefined part counts.
Our method successfully reconstructs variously structured multiple instances that previous works cannot handle, and outperforms prior works in shape reconstruction and kinematics estimation.
arXiv Detail & Related papers (2025-04-04T05:08:04Z)
- An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation [45.820901263103806]
Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z)
- pix2gestalt: Amodal Segmentation by Synthesizing Wholes [34.45464291259217]
pix2gestalt is a framework for zero-shot amodal segmentation.
We learn a conditional diffusion model for reconstructing whole objects in challenging zero-shot cases.
arXiv Detail & Related papers (2024-01-25T18:57:36Z)
- Harnessing Diffusion Models for Visual Perception with Meta Prompts [68.78938846041767]
We propose a simple yet effective scheme to harness a diffusion model for visual perception tasks.
We introduce learnable embeddings (meta prompts) to the pre-trained diffusion models to extract proper features for perception.
Our approach achieves new performance records in depth estimation on NYU Depth V2 and KITTI, and in semantic segmentation on Cityscapes.
arXiv Detail & Related papers (2023-12-22T14:40:55Z)
- ShapeMatcher: Self-Supervised Joint Shape Canonicalization, Segmentation, Retrieval and Deformation [47.94499636697971]
We present ShapeMatcher, a unified self-supervised learning framework for joint shape canonicalization, segmentation, retrieval and deformation.
The key insight of ShapeMatcher is the simultaneous training of four highly associated processes: canonicalization, segmentation, retrieval, and deformation.
arXiv Detail & Related papers (2023-11-18T15:44:57Z)
- Shape-centered Representation Learning for Visible-Infrared Person Re-identification [53.56628297970931]
Current Visible-Infrared Person Re-Identification (VI-ReID) methods prioritize extracting distinguishing appearance features.
We propose the Shape-centered Representation Learning framework (ScRL), which focuses on learning shape features and appearance features associated with shapes.
To acquire appearance features related to shape, we design the Appearance Feature Enhancement (AFE), which accentuates identity-related features while suppressing identity-unrelated features guided by shape features.
arXiv Detail & Related papers (2023-10-27T07:57:24Z)
- Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification [90.39454748065558]
Body shape is one of the significant modality-shared cues for VI-ReID.
We propose shape-erased feature learning paradigm that decorrelates modality-shared features in two subspaces.
Experiments on SYSU-MM01, RegDB, and HITSZ-VCM datasets demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-04-09T10:22:10Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
First, we present a 6D pose refiner based on a render & compare strategy that can be applied to novel objects.
Second, we introduce a coarse pose estimation approach that leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- Geo-SIC: Learning Deformable Geometric Shapes in Deep Image Classifiers [8.781861951759948]
This paper presents Geo-SIC, the first deep learning model to learn deformable shapes in a deformation space for an improved performance of image classification.
We introduce a newly designed framework that simultaneously derives features from both image and latent shape spaces with large intra-class variations.
We develop a boosted classification network, equipped with an unsupervised learning of geometric shape representations.
arXiv Detail & Related papers (2022-10-25T01:55:17Z)
- Saliency-Driven Active Contour Model for Image Segmentation [2.8348950186890467]
We propose a novel model that exploits the advantages of a saliency map together with local image information (LIF) and overcomes the drawbacks of previous models.
The proposed model is driven by the image's saliency map and local image information to improve the evolution of the active contour.
arXiv Detail & Related papers (2022-05-23T06:02:52Z)
- Cross-Shape Attention for Part Segmentation of 3D Point Clouds [11.437076464287822]
We propose a cross-shape attention mechanism to enable interactions between a shape's point-wise features and those of other shapes.
The mechanism assesses the degree of interaction between points and mediates feature propagation across shapes.
Our approach yields state-of-the-art results in the popular PartNet dataset.
arXiv Detail & Related papers (2020-03-20T00:23:10Z)