Discrete Morse Sandwich: Fast Computation of Persistence Diagrams for
Scalar Data -- An Algorithm and A Benchmark
- URL: http://arxiv.org/abs/2206.13932v1
- Date: Mon, 27 Jun 2022 10:54:24 GMT
- Title: Discrete Morse Sandwich: Fast Computation of Persistence Diagrams for
Scalar Data -- An Algorithm and A Benchmark
- Authors: Pierre Guillou, Jules Vidal, and Julien Tierny
- Abstract summary: This paper introduces an efficient algorithm for persistence diagram computation, given an input piecewise linear scalar field f defined on a d-dimensional simplicial complex K.
We express this algorithm within the setting of discrete Morse theory, which considerably reduces the number of input simplices to consider.
We also introduce a stratification approach to the problem, which we call "sandwiching".
- Score: 8.648433479399857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces an efficient algorithm for persistence diagram
computation, given an input piecewise linear scalar field $f$ defined on a
$d$-dimensional simplicial complex $K$, with $d \leq 3$. Our method extends the
seminal "PairCells" algorithm by introducing three main accelerations. First,
we express this algorithm within the setting of discrete Morse theory, which
considerably reduces the number of input simplices to consider. Second, we
introduce a stratification approach to the problem, which we call "sandwiching".
Specifically, minima-saddle persistence pairs ($D_0(f)$) and saddle-maximum
persistence pairs ($D_{d-1}(f)$) are efficiently computed by processing, with a
Union-Find data structure, the unstable sets of 1-saddles and the stable sets of
$(d-1)$-saddles, respectively. This fast processing of dimensions 0 and $(d-1)$
drastically reduces the number of critical simplices left to consider for the
computation of $D_1(f)$, the intermediate layer of the sandwich. Third, we
document several performance improvements via shared-memory parallelism. We
provide an open-source implementation of our algorithm for reproducibility
purposes. We also contribute a reproducible benchmark package, which exploits
three-dimensional data from a public repository and compares our algorithm to a
variety of publicly available implementations. Extensive experiments indicate
that our algorithm improves the time performance of the seminal "PairCells"
algorithm it extends by two orders of magnitude. Moreover, it improves memory
footprint and time performance over a selection of 14 competing approaches,
with a substantial gain over the fastest available approaches, while producing
a strictly identical output. We illustrate the utility of our contributions
with an application to the fast and robust extraction of persistent
1-dimensional generators on surfaces, volume data and high-dimensional point
clouds.
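To make the "sandwiching" stratification more concrete, below is a minimal, illustrative Python sketch of the classical Union-Find pass that the abstract alludes to for the minima-saddle pairs $D_0(f)$. It operates only on the 1-skeleton (vertex values and edges) and is not the paper's DiscreteMorseSandwich implementation; the function name, the input format, and the zero-persistence filtering are assumptions made for illustration.

def persistence_pairs_d0(f, edges):
    """f: list of vertex scalar values; edges: list of (u, v) vertex index pairs."""
    n = len(f)
    parent = list(range(n))

    def find(x):
        # Union-Find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    # In a lower-star filtration, an edge appears at the value of its higher endpoint.
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # same connected component: no 0-dimensional event
        # Invariant: each root is the vertex of minimum value in its component.
        elder, younger = (ru, rv) if f[ru] <= f[rv] else (rv, ru)
        birth = f[younger]            # the younger minimum is born here ...
        death = max(f[u], f[v])       # ... and dies when this edge merges the components
        if death > birth:             # drop zero-persistence pairs
            pairs.append((birth, death))
        parent[younger] = elder       # elder rule: the older minimum survives the merge
    # The global minimum never dies: one essential pair.
    pairs.append((min(f), float("inf")))
    return pairs

# Tiny usage example: a path graph with values 0, 2, 1. The local minimum at
# value 1 dies when the edge reaching value 2 merges the two components.
print(persistence_pairs_d0([0.0, 2.0, 1.0], [(0, 1), (1, 2)]))  # [(1.0, 2.0), (0.0, inf)]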
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks and efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks, Synapse, BTCV, ACDC, BraTS, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
arXiv Detail & Related papers (2022-12-08T18:59:57Z) - Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method for solving the large-scale multimedia retrieval problem.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z) - Rapid Person Re-Identification via Sub-space Consistency Regularization [51.76876061721556]
Person Re-Identification (ReID) matches pedestrians across disjoint cameras.
Existing ReID methods adopting real-value feature descriptors have achieved high accuracy, but they are low in efficiency due to the slow Euclidean distance computation.
We propose a novel Sub-space Consistency Regularization (SCR) algorithm that can speed up the ReID procedure by 0.25 times.
arXiv Detail & Related papers (2022-07-13T02:44:05Z) - A Fast PC Algorithm with Reversed-order Pruning and A Parallelization
Strategy [22.31288740171446]
The PC algorithm is the state-of-the-art algorithm for causal structure discovery on observational data.
It can be computationally expensive in the worst case because of the conditional independence tests it performs.
This makes the algorithm computationally intractable when the task contains several hundred or thousands of nodes.
We make the critical observation that the conditional set rendering two nodes independent is non-unique, and that including certain redundant nodes does not sacrifice result accuracy.
arXiv Detail & Related papers (2021-09-10T02:22:10Z) - RAMA: A Rapid Multicut Algorithm on GPU [23.281726932718232]
We propose a highly parallel primal-dual algorithm for the multicut (a.k.a. correlation clustering) problem.
Our algorithm produces primal solutions and dual lower bounds that estimate the distance to optimum.
We can solve very large scale benchmark problems with up to $\mathcal{O}(10^8)$ variables in a few seconds with small primal-dual gaps.
arXiv Detail & Related papers (2021-09-04T10:33:59Z) - Fast Parallel Algorithms for Euclidean Minimum Spanning Tree and
Hierarchical Spatial Clustering [6.4805900740861]
We introduce a new notion of well-separation to reduce the work and space of our algorithm for HDBSCAN$*$.
We show that our algorithms are theoretically efficient: they have work (number of operations) matching their sequential counterparts, and polylogarithmic depth (parallel time).
Our experiments on large real-world and synthetic data sets using a 48-core machine show that our fastest algorithms outperform the best serial algorithms for the problems by 11.13--55.89x, and existing parallel algorithms by at least an order of magnitude.
arXiv Detail & Related papers (2021-04-02T16:05:00Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations; this work obtains such a method by building on the robust point matching (RPM) objective.
We first show that the objective is a cubic bound function. We then utilize the convex envelopes of trilinear and bilinear monomial transformations to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Fast Coherent Point Drift [4.369046007546103]
Coherent point drift (CPD) is a classical method for nonrigid point set registration.
By introducing a simple corresponding constraint, we develop a fast implementation of CPD.
Experimental results on 3D point cloud data show that our method can significantly reduce the burden of the registration process.
arXiv Detail & Related papers (2020-06-11T09:35:23Z)