Partial Shape Similarity via Alignment of Multi-Metric Hamiltonian
Spectra
- URL: http://arxiv.org/abs/2207.03018v1
- Date: Thu, 7 Jul 2022 00:03:50 GMT
- Title: Partial Shape Similarity via Alignment of Multi-Metric Hamiltonian
Spectra
- Authors: David Bensaïd, Amit Bracha, Ron Kimmel
- Abstract summary: We propose a novel axiomatic method to match similar regions across shapes.
Matching similar regions is formulated as the alignment of the spectra of operators closely related to the Laplace-Beltrami operator (LBO)
We show that matching these dual spectra outperforms competing axiomatic frameworks when tested on standard benchmarks.
- Score: 10.74981839055037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the similarity of non-rigid shapes with significant partiality is
a fundamental task in numerous computer vision applications. Here, we propose a
novel axiomatic method to match similar regions across shapes. Matching similar
regions is formulated as the alignment of the spectra of operators closely
related to the Laplace-Beltrami operator (LBO). The main novelty of the
proposed approach is the consideration of differential operators defined on a
manifold with multiple metrics. The choice of a metric relates to fundamental
shape properties while considering the same manifold under different metrics
can thus be viewed as analyzing the underlying manifold from different
perspectives. Specifically, we examine the scale-invariant metric and the
corresponding scale-invariant Laplace-Beltrami operator (SI-LBO) along with the
regular metric and the regular LBO. We demonstrate that the scale-invariant
metric emphasizes the locations of important semantic features in articulated
shapes. A truncated spectrum of the SI-LBO consequently better captures locally
curved regions and complements the global information encapsulated in the
truncated spectrum of the regular LBO. We show that matching these dual spectra
outperforms competing axiomatic frameworks when tested on standard benchmarks.
We introduce a new dataset and compare the proposed method with a
state-of-the-art learning-based approach in a cross-database configuration.
Specifically, we show that, when trained on one dataset and tested on another,
the proposed axiomatic approach, which does not involve training, outperforms
the deep learning alternative.
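The core matching idea, comparing regions by the truncated spectra of Laplacian-type operators, can be sketched on toy graph Laplacians. This is a hypothetical stand-in: the paper works with the LBO and SI-LBO on triangle meshes, not plain graph Laplacians, and the function names below are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian

def truncated_spectrum(adjacency, k):
    """First k Laplacian eigenvalues (ascending) of a graph given by its
    dense adjacency matrix; a rough stand-in for the truncated LBO
    spectrum of a mesh region."""
    vals = eigh(laplacian(adjacency), eigvals_only=True)
    return vals[:k]

def spectrum_distance(spec_a, spec_b):
    """L2 distance between two truncated spectra of equal length; lower
    means the two regions are spectrally more similar."""
    return float(np.linalg.norm(np.asarray(spec_a) - np.asarray(spec_b)))

# Toy "regions": a 4-cycle and a 4-path have different spectra, so the
# distance separates them, while identical regions score zero.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
```

In this setup the cycle/path distance with k=3 comes out to sqrt(2), since their sorted Laplacian spectra differ only in the second eigenvalue (2 vs. 2 - sqrt(2)).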
Related papers
- LBONet: Supervised Spectral Descriptors for Shape Analysis [2.7762142076121052]
The Laplace-Beltrami operator has established itself in the field of non-rigid shape analysis.
This paper proposes a supervised way to learn several operators on a manifold.
By applying these functions, we can train the LBO eigenbasis to be more task-specific.
arXiv Detail & Related papers (2024-11-13T00:49:05Z)
- Cluster-Aware Similarity Diffusion for Instance Retrieval [64.40171728912702]
Diffusion-based re-ranking is a common method used for retrieving instances by performing similarity propagation in a nearest neighbor graph.
We propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval.
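The diffusion re-ranking idea summarized above can be sketched as restart-style similarity propagation on an affinity graph. This is a minimal sketch, not the CAS method itself; the matrix and parameters are illustrative.

```python
import numpy as np

def diffuse_scores(W, init, alpha=0.85, iters=50):
    """Propagate initial query-similarity scores through a
    row-stochastic kNN affinity matrix W, blending in the original
    scores at each step (restart probability 1 - alpha)."""
    s = init.astype(float).copy()
    for _ in range(iters):
        s = alpha * (W @ s) + (1 - alpha) * init
    return s

# Chain graph 0-1-2-3 with the query initially matching node 0:
# diffused scores decay with graph distance from the query.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
scores = diffuse_scores(W, np.array([1.0, 0.0, 0.0, 0.0]))
```

Ranking by the diffused scores rather than the raw ones is what lets neighbors of strong matches rise in the retrieval list.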
arXiv Detail & Related papers (2024-06-04T14:19:50Z)
- Duality of Bures and Shape Distances with Implications for Comparing Neural Representations [6.698235069945606]
A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape.
First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances all learn explicit mappings between neural units to quantify similarity.
Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity in summary statistics.
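Of the summary-statistic measures named above, linear CKA has a particularly compact form; a minimal sketch (assuming representation matrices with samples as rows):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (samples, features). Invariant to orthogonal
    transformations and isotropic scaling of the features."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2
    denom = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return float(hsic / denom)
```

Because no mapping is fitted, CKA compares the two representations' similarity structures directly, which is exactly the distinction the paper draws between the two families of measures.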
arXiv Detail & Related papers (2023-11-19T22:17:09Z)
- Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z)
- Approximating Intersections and Differences Between Linear Statistical Shape Models Using Markov Chain Monte Carlo [5.8691349601057325]
We present a new method to compare two linear SSMs in dense correspondence.
We approximate the distribution of shapes lying in the intersection space using Markov chain Monte Carlo.
We showcase the proposed algorithm qualitatively by computing and analyzing intersection spaces and differences between publicly available face models.
arXiv Detail & Related papers (2022-11-29T15:54:34Z)
- Linear Connectivity Reveals Generalization Strategies [54.947772002394736]
Some pairs of finetuned models have large barriers of increasing loss on the linear paths between them.
We find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster.
Our work demonstrates how the geometry of the loss surface can guide models towards different functions.
arXiv Detail & Related papers (2022-05-24T23:43:02Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Log-Euclidean Signatures for Intrinsic Distances Between Unaligned Datasets [47.20862716252927]
We use manifold learning to compare the intrinsic geometric structures of different datasets.
We define a new theoretically-motivated distance based on a lower bound of the log-Euclidean metric.
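The standard log-Euclidean metric that the bound above relates to can be written down directly for symmetric positive-definite matrices; a minimal sketch (the paper's actual signature is a lower bound of this quantity, not this formula itself):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between symmetric positive-definite
    matrices: Frobenius norm of the difference of their matrix
    logarithms."""
    return float(np.linalg.norm(logm(A) - logm(B), 'fro'))

# For diagonal SPD matrices the matrix log acts entrywise on the
# diagonal, so the distance has a closed form.
A = np.diag([2.0, 3.0])
B = np.eye(2)
d = log_euclidean_distance(A, B)
```

Working with matrix logarithms makes the distance invariant to inversion of the matrices, which is one reason this metric is popular for comparing covariance-like structures.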
arXiv Detail & Related papers (2022-02-03T16:37:23Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.