On the Convergence of Optimizing Persistent-Homology-Based Losses
- URL: http://arxiv.org/abs/2206.02946v1
- Date: Mon, 6 Jun 2022 23:22:37 GMT
- Title: On the Convergence of Optimizing Persistent-Homology-Based Losses
- Authors: Yikai Zhang, Jiachen Yao, Yusu Wang, Chao Chen
- Abstract summary: A topological loss forces the model to satisfy certain desired topological properties.
We introduce a general-purpose regularized topology-aware loss.
These contributions lead to a new loss function that not only enforces the desired topological behavior on the model, but also achieves satisfactory convergence behavior.
- Score: 16.308134813298867
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Topological loss based on persistent homology has shown promise in various
applications. A topological loss forces the model to satisfy certain desired
topological properties. Despite its empirical success, little is known about
the optimization behavior of the loss. In fact, the topological loss involves
combinatorial configurations that may oscillate during optimization. In this
paper, we introduce a general-purpose regularized topology-aware loss. We
propose a novel regularization term and also modify the existing topological
loss. These contributions lead to a new loss function that not only enforces
the desired topological behavior on the model, but also achieves satisfactory
convergence behavior. Our main theoretical result guarantees that the loss
can be optimized efficiently under mild assumptions.
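To make the optimization question concrete, here is a minimal, self-contained sketch of a persistent-homology-based loss on a 1D signal. It is an illustration under our own assumptions, not the paper's construction: `persistence_pairs_1d` computes 0-dimensional sublevel-set persistence with a standard union-find merge, and `topo_loss` penalizes the persistence of all but the `k` most salient features plus an L2 regularization term (the names and the specific penalty are hypothetical).

```python
import numpy as np

def persistence_pairs_1d(f):
    """0-dimensional sublevel-set persistence of a 1D signal via a
    union-find sweep: visit samples by increasing value; when a sample
    joins two existing components, the younger component dies there."""
    n = len(f)
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    for i in np.argsort(f):                    # sweep values bottom-up
        parent[i], birth[i] = i, f[i]          # a new component is born
        for j in (i - 1, i + 1):               # merge with active neighbors
            if 0 <= j < n and j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < f[i]:        # ignore zero-persistence pairs
                    pairs.append((birth[young], f[i]))
                parent[young] = old            # elder rule: older one survives
    return pairs                               # the global minimum never dies

def topo_loss(f, k=1, lam=0.1):
    """Illustrative regularized topology-aware loss (an assumption, not
    the paper's exact formulation): keep the k most persistent features,
    pay for the total persistence of the rest, and add an L2 term."""
    pers = sorted((d - b for b, d in persistence_pairs_1d(f)), reverse=True)
    return sum(pers[max(k - 1, 0):]) + lam * float(np.sum(f ** 2))

# Example: a signal with two spurious minima beyond the global one.
f = np.array([0.0, 1.0, 0.2, 1.5, 0.1])
print(persistence_pairs_1d(f))   # (birth, death) pairs (0.2, 1.0), (0.1, 1.5)
print(topo_loss(f, k=1))         # penalizes both finite pairs plus the L2 term
```

Because the persistence pairs are recomputed after every parameter update, the matched critical points can change combinatorially between steps; the regularization term above is a stand-in for the kind of term the paper adds to tame that oscillation.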
Related papers
- A surrogate model for topology optimisation of elastic structures via parametric autoencoders [0.0]
Instead of learning the parametric solution of the state (and adjoint) problems, the proposed approach devises a surrogate version of the entire optimisation pipeline.
The method predicts a quasi-optimal topology for a given problem configuration as a surrogate model of high-fidelity topologies optimised with the homogenisation method.
Different architectures are proposed and the approximation and generalisation capabilities of the resulting models are numerically evaluated.
arXiv Detail & Related papers (2025-07-30T10:07:42Z) - ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have attracted high expectations for solving partial differential equations (PDEs).
Previous research observed the propagation-failure phenomenon in PINNs.
This paper provides the first formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z) - STITCH: Surface reconstrucTion using Implicit neural representations with Topology Constraints and persistent Homology [23.70495314317551]
We present STITCH, a novel approach for neural implicit surface reconstruction of a sparse and irregularly spaced point cloud.
We develop a new differentiable framework based on persistent homology to formulate topological loss terms that enforce the prior of a single 2-manifold object.
arXiv Detail & Related papers (2024-12-24T22:55:35Z) - Topograph: An efficient Graph-Based Framework for Strictly Topology Preserving Image Segmentation [78.54656076915565]
Topological correctness plays a critical role in many image segmentation tasks.
Most networks are trained using pixel-wise loss functions, such as Dice, neglecting topological accuracy.
We propose a novel, graph-based framework for topologically accurate image segmentation.
arXiv Detail & Related papers (2024-11-05T16:20:14Z) - Diffeomorphic interpolation for efficient persistence-based topological optimization [3.7550827441501844]
Topological Data Analysis (TDA) provides a pipeline to extract quantitative topological descriptors from structured objects.
We show that our approach combines efficiently with subsampling techniques, as the diffeomorphism derived from the gradient computed on a subsample can be used to update the coordinates of the full input object.
We also showcase the relevance of our approach for black-box autoencoder (AE) regularization, where we aim at enforcing topological priors on the latent spaces associated with fixed, pre-trained, black-box AE models.
arXiv Detail & Related papers (2024-05-29T07:00:28Z) - On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z) - A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z) - Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability [69.01076284478151]
In machine learning optimization, gradient descent (GD) often operates at the edge of stability (EoS).
This paper studies the convergence and implicit bias of constant-stepsize GD for logistic regression on linearly separable data in the EoS regime.
arXiv Detail & Related papers (2023-05-19T16:24:47Z) - Cross-Entropy Loss Functions: Theoretical Analysis and Applications [27.3569897539488]
We present a theoretical analysis of a broad family of loss functions that includes cross-entropy (or logistic loss), generalized cross-entropy, the mean absolute error, and other cross-entropy-like loss functions.
We show that these loss functions are beneficial in the adversarial setting by proving that they admit $H$-consistency bounds.
This leads to new adversarial robustness algorithms that consist of minimizing a regularized smooth adversarial comp-sum loss.
arXiv Detail & Related papers (2023-04-14T17:58:23Z) - Topologically penalized regression on manifolds [0.0]
We study a regression problem on a compact manifold M.
In order to take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first several eigenfunctions of the Laplace-Beltrami operator of the manifold.
The proposed penalties are based on the topology of the sub-level sets of either the eigenfunctions or the estimated function.
arXiv Detail & Related papers (2021-10-26T14:59:13Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Risk Guarantees for End-to-End Prediction and Optimization Processes [0.0]
We study conditions that allow us to explicitly describe how the prediction performance governs the optimization performance.
We derive the exact theoretical relationship between the prediction performance, measured with the squared loss as well as a class of symmetric loss functions, and the subsequent optimization performance.
arXiv Detail & Related papers (2020-12-30T05:20:26Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative, and in particular dissipative, Hamiltonian systems is able to preserve rates of convergence up to a controlled error; a minimal sketch of such an integrator follows this list.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
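As a concrete reading of the last entry, the following is a minimal sketch, under our own assumptions rather than the paper's reference code, of a dissipative (conformal) symplectic Euler integrator for H(q, p) = |p|^2/2 + f(q) with linear damping. With mu = exp(-gamma * h) the update coincides with heavy-ball momentum, which is the kind of rate-preserving discretization the entry describes; the function name and parameter choices are illustrative.

```python
import numpy as np

def conformal_symplectic_euler(grad, q0, steps=200, h=0.1, gamma=1.0):
    """Dissipative (conformal) symplectic Euler for
    H(q, p) = |p|^2 / 2 + f(q) with linear damping gamma.
    With mu = exp(-gamma * h) it matches heavy-ball momentum."""
    q = np.asarray(q0, dtype=float)
    p = np.zeros_like(q)
    mu = np.exp(-gamma * h)            # per-step contraction of momentum
    for _ in range(steps):
        p = mu * p - h * grad(q)       # damped momentum update
        q = q + h * p                  # position update with new momentum
    return q

# Usage: minimize the quadratic f(q) = q^T A q / 2; the iterates spiral
# into the minimizer at the origin at a geometric rate.
A = np.diag([1.0, 10.0])
print(conformal_symplectic_euler(lambda q: A @ q, q0=[1.0, 1.0]))
```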
This list is automatically generated from the titles and abstracts of the papers on this site.