RipsNet: a general architecture for fast and robust estimation of the
persistent homology of point clouds
- URL: http://arxiv.org/abs/2202.01725v2
- Date: Fri, 4 Feb 2022 11:23:37 GMT
- Title: RipsNet: a general architecture for fast and robust estimation of the
persistent homology of point clouds
- Authors: Thibault de Surrel, Felix Hensel, Mathieu Carrière, Théo Lacombe,
Yuichi Ike, Hiroaki Kurihara, Marc Glisse, Frédéric Chazal
- Abstract summary: We show that RipsNet can estimate topological descriptors on test data very efficiently with generalization capacity.
We prove that RipsNet is robust to input perturbations in terms of the 1-Wasserstein distance.
We showcase the use of RipsNet on both synthetic and real-world data.
- Score: 4.236277880658203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of topological descriptors in modern machine learning applications,
such as Persistence Diagrams (PDs) arising from Topological Data Analysis
(TDA), has shown great potential in various domains. However, their practical
use in applications is often hindered by two major limitations: the
computational complexity required to compute such descriptors exactly, and
their sensitivity to even low proportions of outliers. In this work, we
propose to bypass these two burdens in a data-driven setting by entrusting the
estimation of (vectorization of) PDs built on top of point clouds to a neural
network architecture that we call RipsNet. Once trained on a given data set,
RipsNet can estimate topological descriptors on test data very efficiently with
generalization capacity. Furthermore, we prove that RipsNet is robust to input
perturbations in terms of the 1-Wasserstein distance, a major improvement over
the standard computation of PDs, which only enjoys Hausdorff stability; this enables
RipsNet to substantially outperform exactly-computed PDs in noisy settings. We
showcase the use of RipsNet on both synthetic and real-world data. Our
open-source implementation is publicly available at
https://github.com/hensel-f/ripsnet and will be included in the Gudhi library.
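The abstract describes RipsNet as a neural network that maps a point cloud directly to an estimate of a PD vectorization, which requires the network to be invariant to the ordering of input points. A minimal sketch of such a permutation-invariant (DeepSets-style) estimator is shown below; the layer sizes, activation choices, and random weights are illustrative assumptions standing in for a trained model, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's choices):
# input point dimension, hidden width, vectorization length.
D_IN, D_HID, D_OUT = 2, 16, 8

# Random weights standing in for trained parameters.
W_phi = rng.standard_normal((D_IN, D_HID))
W_rho = rng.standard_normal((D_HID, D_OUT))

def ripsnet_like(point_cloud: np.ndarray) -> np.ndarray:
    """Permutation-invariant estimator: apply a pointwise feature
    map phi to every point, pool with a symmetric operation (mean),
    then decode with rho into a PD-vectorization estimate."""
    phi = np.tanh(point_cloud @ W_phi)   # pointwise feature map
    pooled = phi.mean(axis=0)            # invariant to point order
    return np.tanh(pooled @ W_rho)       # decoder

cloud = rng.standard_normal((100, D_IN))
shuffled = cloud[rng.permutation(len(cloud))]

# Reordering the input points does not change the output.
assert np.allclose(ripsnet_like(cloud), ripsnet_like(shuffled))
```

Because the network evaluates in a single forward pass, this kind of estimator avoids the combinatorial cost of building a Rips filtration at test time, which is the efficiency gain the abstract refers to.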
Related papers
- Nes2Net: A Lightweight Nested Architecture for Foundation Model Driven Speech Anti-spoofing [56.53218228501566]
Nested Res2Net (Nes2Net) is a lightweight back-end architecture designed to directly process high-dimensional features without dimensionality-reduction (DR) layers.
We report a 22% performance improvement and an 87% back-end computational cost reduction over the state-of-the-art baseline.
arXiv Detail & Related papers (2025-04-08T04:11:28Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a favorable trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis [32.45387153404849]
We propose a novel framework, referred to as RObust Training for Time-Series (RO-TS), to create robust DNNs for time-series classification tasks.
We show the generality and advantages of our formulation using the summation structure over time-series alignments.
Our experiments on real-world benchmarks demonstrate that RO-TS creates more robust DNNs when compared to adversarial training.
arXiv Detail & Related papers (2022-07-09T17:21:03Z)
- Smooth densities and generative modeling with unsupervised random forests [1.433758865948252]
An important application for density estimators is synthetic data generation.
We propose a new method based on unsupervised random forests for estimating smooth densities in arbitrary dimensions without parametric constraints.
We prove the consistency of our approach and demonstrate its advantages over existing tree-based density estimators.
arXiv Detail & Related papers (2022-05-19T09:50:25Z)
- RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging [36.027765880474526]
Pruning Deep Neural Networks (DNNs) is a prominent field of study aimed at accelerating inference runtime.
We introduce a novel data-free pruning protocol RED++.
We study the theoretical and empirical guarantees on accuracy preservation under the hashing step.
arXiv Detail & Related papers (2021-09-30T09:31:11Z)
- Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments.
In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network.
arXiv Detail & Related papers (2021-02-08T05:55:47Z)
- Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
arXiv Detail & Related papers (2020-12-16T20:16:28Z)
- Statistical Mechanical Analysis of Neural Network Pruning [6.029526715675584]
We show that the DPP-based node pruning method is notably superior to competing approaches when tested on real datasets.
We use our theoretical setup to prove this finding and show that even the baseline random edge pruning method performs better than the DPP node pruning method.
arXiv Detail & Related papers (2020-06-30T09:15:25Z)
- Boundary-assisted Region Proposal Networks for Nucleus Segmentation [89.69059532088129]
Machine learning models cannot perform well on these images due to the large numbers of crowded nuclei.
We devise a Boundary-assisted Region Proposal Network (BRP-Net) that achieves robust instance-level nucleus segmentation.
arXiv Detail & Related papers (2020-06-04T08:26:38Z)
- A Model-driven Deep Neural Network for Single Image Rain Removal [52.787356046951494]
We propose a model-driven deep neural network for the task, with fully interpretable network structures.
Based on the convolutional dictionary learning mechanism for representing rain, we propose a novel single image deraining model.
All the rain kernels and operators can be automatically extracted, faithfully characterizing the features of both rain and clean background layers.
arXiv Detail & Related papers (2020-05-04T09:13:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.