PLLay: Efficient Topological Layer based on Persistence Landscapes
- URL: http://arxiv.org/abs/2002.02778v4
- Date: Mon, 18 Jan 2021 00:44:49 GMT
- Title: PLLay: Efficient Topological Layer based on Persistence Landscapes
- Authors: Kwangho Kim, Jisu Kim, Manzil Zaheer, Joon Sik Kim, Frederic Chazal,
and Larry Wasserman
- Abstract summary: PLLay is a novel topological layer for general deep learning models based on persistence landscapes.
We show differentiability with respect to layer inputs, for a general persistent homology with arbitrary filtration.
- Score: 24.222495922671442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose PLLay, a novel topological layer for general deep learning models
based on persistence landscapes, in which we can efficiently exploit the
underlying topological features of the input data structure. In this work, we
show differentiability with respect to layer inputs, for a general persistent
homology with arbitrary filtration. Thus, our proposed layer can be placed
anywhere in the network and feed critical information on the topological
features of input data into subsequent layers to improve the learnability of
the networks toward a given task. A task-optimal structure of PLLay is learned
during training via backpropagation, without requiring any input featurization
or data preprocessing. We provide a novel adaptation for the DTM function-based
filtration, and show that the proposed layer is robust against noise and
outliers through a stability analysis. We demonstrate the effectiveness of our
approach by classification experiments on various datasets.
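To make the core object concrete: a persistence landscape turns a persistence diagram (a set of birth–death pairs) into a sequence of piecewise-linear functions λ_k, where λ_k(t) is the k-th largest value among the "tent" functions min(t − b, d − t)₊ of the diagram points. The sketch below is a minimal NumPy illustration of this definition, not the paper's PLLay implementation (which additionally makes the map differentiable and learnable inside a network); the function name and signature are our own.

```python
import numpy as np

def landscape(diagram, k, ts):
    """Evaluate the k-th persistence landscape lambda_k at times ts.

    diagram: iterable of (birth, death) pairs from a persistence diagram.
    Each point contributes a tent function min(t - b, d - t) clipped at 0;
    lambda_k(t) is the k-th largest tent value at t (0 if fewer than k points).
    """
    diagram = np.asarray(diagram, dtype=float)
    ts = np.asarray(ts, dtype=float)
    b = diagram[:, 0][:, None]  # births, shape (n, 1)
    d = diagram[:, 1][:, None]  # deaths, shape (n, 1)
    # tent values for every diagram point at every time, shape (n, len(ts))
    tents = np.clip(np.minimum(ts - b, d - ts), 0.0, None)
    if k > tents.shape[0]:
        return np.zeros_like(ts)
    # sort descending along the point axis and pick the k-th largest
    return -np.sort(-tents, axis=0)[k - 1]

# Example: a single feature born at 0 and dying at 2 gives a tent peaking at t=1.
lam = landscape([(0.0, 2.0)], 1, np.linspace(0.0, 2.0, 5))
```

Because λ_k(t) is a pointwise maximum/sorting of piecewise-linear functions of the birth and death times, it is differentiable almost everywhere in the diagram coordinates, which is the property PLLay exploits to backpropagate through the layer.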
Related papers
- Image Classification using Combination of Topological Features and
Neural Networks [1.0323063834827417]
We use the persistent homology method, a technique in topological data analysis (TDA), to extract essential topological features from the data space.
This was carried out with the aim of classifying images from multiple classes in the MNIST dataset.
Our approach feeds topological features into deep learning models composed of single- and two-stream neural networks.
arXiv Detail & Related papers (2023-11-10T20:05:40Z)
- Understanding Deep Representation Learning via Layerwise Feature
Compression and Discrimination [33.273226655730326]
We show that each layer of a deep linear network progressively compresses within-class features at a geometric rate and discriminates between-class features at a linear rate.
This is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.
arXiv Detail & Related papers (2023-11-06T09:00:38Z)
- Learning the Right Layers: a Data-Driven Layer-Aggregation Strategy for
Semi-Supervised Learning on Multilayer Graphs [2.752817022620644]
Clustering (or community detection) on multilayer graphs poses several additional complications.
One of the major challenges is to establish the extent to which each layer contributes to the cluster assignment.
We propose a parameter-free Laplacian-regularized model that learns an optimal nonlinear combination of the different layers from the available input labels.
arXiv Detail & Related papers (2023-05-31T19:50:11Z)
- Rethinking Persistent Homology for Visual Recognition [27.625893409863295]
This paper performs a detailed analysis of the effectiveness of topological properties for image classification in various training scenarios.
We identify the scenarios that benefit the most from topological features, e.g., training simple networks on small datasets.
arXiv Detail & Related papers (2022-07-09T08:01:11Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- Activation Landscapes as a Topological Summary of Neural Network
Performance [0.0]
We study how data transforms as it passes through successive layers of a deep neural network (DNN)
We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes.
The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning.
arXiv Detail & Related papers (2021-10-19T17:45:36Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- Understanding and Diagnosing Vulnerability under Adversarial Attacks [62.661498155101654]
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks.
We propose a novel interpretability method, InterpretGAN, to generate explanations for features used for classification in latent variables.
We also design the first diagnostic method to quantify the vulnerability contributed by each layer.
arXiv Detail & Related papers (2020-07-17T01:56:28Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
- Global Context-Aware Progressive Aggregation Network for Salient Object
Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.