A Practical Solver for Scalar Data Topological Simplification
- URL: http://arxiv.org/abs/2407.12399v3
- Date: Tue, 20 Aug 2024 21:27:00 GMT
- Title: A Practical Solver for Scalar Data Topological Simplification
- Authors: Mohamed Kissi, Mathieu Pont, Joshua A. Levine, Julien Tierny
- Abstract summary: This paper presents a practical approach for the optimization of topological simplification.
We show that our approach leads to improvements over standard topological techniques for removing filament loops.
We also show how our approach can be used to repair genus defects in surface processing.
- Score: 7.079737824450954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of "signal" persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of "non-signal" pairs, while (ii) preserving the "signal" pairs. In contrast to pre-existing simplification algorithms, our approach is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our approach enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our approach can be used to repair genus defects in surface processing. Finally, we provide a C++ implementation for reproducibility purposes.
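The optimization loop at the heart of this approach can be illustrated on a toy case. The following minimal, self-contained Python sketch performs persistence-driven simplification for 0-dimensional pairs of a 1D field: it repeatedly computes the persistence pairs, then nudges the birth value up and the death value down for every pair whose persistence falls below a "signal" threshold, yielding an output g that stays close to f while the non-signal pairs are numerically cancelled. All function names and parameters here are illustrative; the paper's solver targets a much broader setting (notably saddle pairs in 3D data) with tailored accelerations, and its actual implementation is in C++.

```python
import numpy as np

def persistence_pairs_0d(f):
    """0-dimensional persistence pairs of a 1D field (sublevel-set filtration).

    Returns (birth_index, death_index) pairs: each local minimum (birth) is
    merged away at a saddle/local maximum (death), following the elder rule.
    The global minimum never dies and is omitted."""
    n = len(f)
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    pairs = []
    for v in np.argsort(f, kind="stable"):  # sweep vertices by increasing value
        parent[v] = v
        roots = {find(nb) for nb in (v - 1, v + 1) if 0 <= nb < n and nb in parent}
        if len(roots) == 2:                  # v merges two components
            a, b = roots
            dead = a if f[a] > f[b] else b   # elder rule: the younger minimum dies
            survivor = b if dead is a else a
            pairs.append((dead, v))
            parent[dead] = parent[v] = survivor
        elif len(roots) == 1:
            parent[v] = roots.pop()
    return pairs

def simplify(f, signal_threshold, step=0.2, iters=200):
    """Cancel non-signal pairs by pushing births up and deaths down."""
    g = f.astype(float).copy()
    for _ in range(iters):
        for b, d in persistence_pairs_0d(g):
            pers = g[d] - g[b]
            if 0.0 < pers < signal_threshold:  # non-signal pair: shrink it
                g[b] += step * pers
                g[d] -= step * pers
    return g

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0 * np.pi, 400)
f = np.sin(x) + 0.15 * rng.standard_normal(x.size)  # signal + noise

g = simplify(f, signal_threshold=0.5)

def noise_total(h, thr=0.5):  # total persistence carried by non-signal pairs
    return sum(h[d] - h[b] for b, d in persistence_pairs_0d(h) if h[d] - h[b] < thr)

print(f"non-signal persistence: {noise_total(f):.3f} -> {noise_total(g):.3f}")
print(f"max |g - f| = {np.max(np.abs(g - f)):.3f}")
```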
Related papers
- Diffeomorphic interpolation for efficient persistence-based topological optimization [3.7550827441501844]
Topological Data Analysis (TDA) provides a pipeline to extract quantitative topological descriptors from structured objects.
We show that our approach combines efficiently with subsampling techniques, as the diffeomorphism derived from the gradient computed on a subsample can be used to update the coordinates of the full input object.
We also showcase the relevance of our approach for black-box autoencoder (AE) regularization, where we aim at enforcing topological priors on the latent spaces associated with fixed, pre-trained, black-box AE models.
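The subsampling mechanism summarized above can be sketched in a few lines: a descent direction computed only on a subsample is extended to every point of the full object through a smooth kernel, so the whole cloud is displaced coherently. In the NumPy sketch below, the persistence-based gradient is replaced by a placeholder objective (pushing the subsample toward the unit circle), since the point is the kernel-extension step; names and parameters are illustrative assumptions, not the paper's API.

```python
import numpy as np

def kernel_extend(points, sub_idx, sub_disp, sigma=0.3):
    """Propagate displacements known on a subsample to all points, smoothly."""
    d2 = ((points[:, None, :] - points[sub_idx][None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # (n, m) Gaussian weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12    # normalize per point
    return w @ sub_disp                          # (n, d) displacement field

rng = np.random.default_rng(1)
pts = rng.normal(size=(2000, 2))                 # full point cloud
sub = rng.choice(len(pts), size=100, replace=False)

for _ in range(50):
    p = pts[sub]
    r = np.linalg.norm(p, axis=1, keepdims=True)
    disp = (1.0 - r) * p / (r + 1e-9)            # placeholder gradient on the
                                                 # subsample: move toward radius 1
    pts += 0.2 * kernel_extend(pts, sub, disp)   # ...but update the *full* cloud

r_all = np.linalg.norm(pts, axis=1)
print(f"radius after flow: {r_all.mean():.2f} +/- {r_all.std():.2f}")
```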
arXiv Detail & Related papers (2024-05-29T07:00:28Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Discrete transforms of quantized persistence diagrams [0.5249805590164902]
We introduce Qupid, a novel and simple method for vectorizing persistence diagrams.
A key feature is the choice of log-scaled grids, which emphasize the information contained near the diagonal of persistence diagrams.
We conduct an in-depth experimental analysis of Qupid, showing that the simplicity of our method results in very low computational costs.
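The gist of the construction, as summarized above, can be sketched in a few lines of Python: quantize the points of a persistence diagram onto a 2D grid and use the per-cell counts as a feature vector, with log-scaled persistence bins so that cells concentrate near the diagonal where most pairs live. The bin counts and ranges below are illustrative choices, not the paper's settings.

```python
import numpy as np

def quantized_diagram(diagram, n_birth=8, n_pers=8, pers_range=(1e-3, 1.0)):
    """diagram: (k, 2) array of (birth, death) pairs -> flat count vector."""
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]              # distance to the diagonal
    b_edges = np.linspace(birth.min(), birth.max(), n_birth + 1)
    p_edges = np.geomspace(*pers_range, n_pers + 1)   # log-scaled: fine near 0
    hist, _, _ = np.histogram2d(birth, pers, bins=[b_edges, p_edges])
    return hist.ravel()                               # fixed-size feature vector

rng = np.random.default_rng(2)
k = 300
births = rng.uniform(0.0, 1.0, k)
deaths = births + rng.exponential(0.05, k)            # most pairs near the diagonal
vec = quantized_diagram(np.column_stack([births, deaths]))
print(vec.shape, int(vec.sum()))                      # (64,) and <= 300 pairs binned
```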
arXiv Detail & Related papers (2023-12-28T16:11:11Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
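A toy illustration of the stabilizing effect: on the bilinear saddle problem f(x, y) = x * y, plain simultaneous gradient descent-ascent spirals away from the equilibrium at the origin, whereas linearly interpolating the current iterate with the result of a few inner steps (a Lookahead-style averaging, in the spirit of the nonexpansive-operator view) contracts toward it. This sketch illustrates the phenomenon only and is not the paper's algorithm.

```python
import numpy as np

def gda_step(z, eta=0.3):
    """One simultaneous GDA step for the bilinear game f(x, y) = x * y."""
    x, y = z
    return np.array([x - eta * y, y + eta * x])

def run(interpolate, steps=100, alpha=0.5, inner=6):
    z = np.array([1.0, 1.0])
    for _ in range(steps):
        if interpolate:
            w = z.copy()
            for _ in range(inner):             # a few inner GDA steps...
                w = gda_step(w)
            z = (1 - alpha) * z + alpha * w    # ...then interpolate back
        else:
            z = gda_step(z)
    return np.linalg.norm(z)

print(f"plain GDA:        |z| = {run(False):.2e}")   # spirals outward
print(f"interpolated GDA: |z| = {run(True):.2e}")    # contracts toward (0, 0)
```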
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at combining partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Topology-Preserving Dimensionality Reduction via Interleaving Optimization [10.097180927318703]
We show how optimization seeking to minimize the interleaving distance can be incorporated into dimensionality reduction algorithms.
We demonstrate the utility of this framework for data visualization.
arXiv Detail & Related papers (2022-01-31T06:11:17Z)
- DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
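The acyclicity-by-construction idea behind this parametrization can be sketched directly: write the weighted adjacency as W[i, j] = M[i, j] * relu(p[j] - p[i]) for a learnable node potential p, so that every edge points strictly "uphill" in p and no directed cycle can exist. The sketch below, with illustrative names and random parameters, only demonstrates the parametrization and certifies acyclicity; the full method optimizes such a representation against data.

```python
import numpy as np

def dag_adjacency(M, p):
    """Weighted adjacency that is acyclic by construction for any M and p."""
    diff = p[None, :] - p[:, None]          # diff[i, j] = p[j] - p[i]
    return M * np.maximum(diff, 0.0)        # keep edge i -> j only if p[j] > p[i]

rng = np.random.default_rng(3)
d = 6
M = rng.normal(size=(d, d))                 # unconstrained edge magnitudes
p = rng.normal(size=d)                      # node potentials
W = dag_adjacency(M, p)

# Sorting nodes by potential must make W strictly upper triangular,
# which certifies that the graph has no directed cycle.
order = np.argsort(p)
Ws = W[np.ix_(order, order)]
print("acyclic:", np.allclose(np.tril(Ws), 0.0))
```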
arXiv Detail & Related papers (2021-06-14T07:11:36Z)
- A Fast and Robust Method for Global Topological Functional Optimization [70.11080854486953]
We introduce a novel backpropagation scheme that is significantly faster, more stable, and produces more robust optima.
This scheme can also be used to produce a stable visualization of dots in a persistence diagram as a distribution over critical and near-critical simplices in the data structure.
arXiv Detail & Related papers (2020-09-17T18:46:16Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Hierarchical regularization networks for sparsification based learning on noisy datasets [0.0]
The hierarchy follows from approximation spaces identified at successively finer scales.
To promote model generalization at each scale, we also introduce a novel projection-based penalty operator across multiple dimensions.
Results show the performance of the approach as a data reduction and modeling strategy on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-09T18:32:24Z)