Eigencontours: Novel Contour Descriptors Based on Low-Rank Approximation
- URL: http://arxiv.org/abs/2203.15259v1
- Date: Tue, 29 Mar 2022 06:14:38 GMT
- Title: Eigencontours: Novel Contour Descriptors Based on Low-Rank Approximation
- Authors: Wonhui Park, Dongkwon Jin, Chang-Su Kim
- Abstract summary: We decompose a contour matrix into eigencontours via the best rank-M approximation.
We represent an object boundary by a linear combination of the M eigencontours.
The proposed algorithm yields meaningful performances on instance segmentation datasets.
- Score: 40.13463458124477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novel contour descriptors, called eigencontours, based on low-rank
approximation are proposed in this paper. First, we construct a contour matrix
containing all object boundaries in a training set. Second, we decompose the
contour matrix into eigencontours via the best rank-M approximation. Third, we
represent an object boundary by a linear combination of the M eigencontours. We
also incorporate the eigencontours into an instance segmentation framework.
Experimental results demonstrate that the proposed eigencontours can represent
object boundaries more effectively and more efficiently than existing
descriptors in a low-dimensional space. Furthermore, the proposed algorithm
yields meaningful performances on instance segmentation datasets.
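The three steps in the abstract correspond to a truncated SVD of the contour matrix. The following is a minimal NumPy sketch of that idea, not the authors' released implementation; it assumes every boundary has been resampled to a fixed number of points N and flattened into one column of a contour matrix A, and that the best rank-M approximation is taken from the leading singular vectors.

```python
import numpy as np

def fit_eigencontours(A, M):
    """Best rank-M approximation of the contour matrix A (2N x K) via truncated SVD.
    The columns of U_M are the M eigencontours."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :M]

def encode(contour_vec, U_M):
    """Project a flattened boundary onto the eigencontour basis -> M coefficients."""
    return U_M.T @ contour_vec

def decode(coeffs, U_M):
    """Reconstruct the boundary as a linear combination of the M eigencontours."""
    return U_M @ coeffs

# Toy usage with placeholder data: K boundaries, each resampled to N points.
N, K, M = 36, 500, 8
A = np.random.randn(2 * N, K)      # stand-in contour matrix, one boundary per column
U_M = fit_eigencontours(A, M)
coeffs = encode(A[:, 0], U_M)      # M coefficients describe the first boundary
recon = decode(coeffs, U_M).reshape(N, 2)
```

Under these assumptions, storing a boundary drops from 2N coordinates to M coefficients, which is the low-dimensional efficiency claim made in the abstract.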
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - AdaContour: Adaptive Contour Descriptor with Hierarchical Representation [52.381359663689004]
Existing angle-based contour descriptors suffer from lossy representation for non-star shapes.
AdaCon is able to represent shapes more accurately and robustly than other descriptors; a minimal sketch of this angle-based limitation follows the related-papers list below.
arXiv Detail & Related papers (2024-04-12T07:30:24Z) - Dual feature-based and example-based explanation methods [2.024925013349319]
A new approach to local and global explanation is proposed.
It is based on selecting a convex hull constructed for the finite number of points around an explained instance.
The code of the proposed algorithms is available.
arXiv Detail & Related papers (2024-01-29T16:53:04Z) - Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
arXiv Detail & Related papers (2023-07-02T13:59:47Z) - Semi-Supervised Clustering via Dynamic Graph Structure Learning [12.687613487964088]
Most existing semi-supervised graph-based clustering methods exploit the supervisory information by refining the affinity matrix or constraining the low-dimensional representations of data points.
We propose a novel dynamic graph learning method for semi-supervised graph clustering.
arXiv Detail & Related papers (2022-09-06T14:05:31Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning [0.13999481573773068]
We present a novel method for graph partitioning based on reinforcement learning and graph convolutional neural networks.
The proposed method achieves partitioning quality similar to METIS and Scotch.
The method generalizes from one class of graphs to another, and works well on a variety of graphs from the SuiteSparse sparse matrix collection.
arXiv Detail & Related papers (2021-04-08T06:54:24Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformation; such a method is realized based on the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
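As noted in the AdaContour entry above, plain angle-based (polar) contour descriptors keep only one radius per angle around the object center, so any ray that crosses the boundary more than once loses detail, which is why they are lossy for non-star shapes. The sketch below is an illustrative, assumption-laden example of such a descriptor, not code from either paper; the bin count K and the max-radius tie-break are arbitrary choices.

```python
import numpy as np

def angle_encode(boundary, K=36):
    """Encode a boundary (P x 2 array) as one radius per uniform angle bin
    around the centroid. When a bin contains several boundary points
    (non-star shapes), only one radius survives -- the lossy step."""
    center = boundary.mean(axis=0)
    d = boundary - center
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r = np.linalg.norm(d, axis=1)
    bins = np.minimum((theta / (2 * np.pi) * K).astype(int), K - 1)
    radii = np.zeros(K)
    for k in range(K):
        hits = r[bins == k]
        radii[k] = hits.max() if hits.size else 0.0   # keep a single radius per angle
    return center, radii

def angle_decode(center, radii):
    """Decode: one point per angle, so multi-valued boundary segments cannot be recovered."""
    K = len(radii)
    theta = (np.arange(K) + 0.5) * 2 * np.pi / K
    return center + radii[:, None] * np.stack([np.cos(theta), np.sin(theta)], axis=1)
```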
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.