Geometric Disentanglement by Random Convex Polytopes
- URL: http://arxiv.org/abs/2009.13987v2
- Date: Sat, 13 Feb 2021 07:39:43 GMT
- Title: Geometric Disentanglement by Random Convex Polytopes
- Authors: Michael Joswig, Marek Kaluba, Lukas Ruff
- Abstract summary: We propose a new geometric method for measuring the quality of representations obtained from deep learning.
Our approach, called Random Polytope Descriptor, provides an efficient description of data points based on the construction of random convex polytopes.
- Score: 3.6852491526879683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new geometric method for measuring the quality of
representations obtained from deep learning. Our approach, called Random
Polytope Descriptor, provides an efficient description of data points based on
the construction of random convex polytopes. We demonstrate the use of our
technique by qualitatively comparing the behavior of classic and regularized
autoencoders. This reveals that applying regularization to autoencoder networks
may decrease the out-of-distribution detection performance in latent space.
While our technique is similar in spirit to $k$-means clustering, we achieve
significantly better false positive/negative balance in clustering tasks on
autoencoded datasets.
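As a rough illustration of the idea (a minimal sketch, not the authors' implementation; all function and parameter names here are hypothetical), a random convex polytope enclosing a point cloud can be built from randomly oriented supporting halfspaces, and a point described by its slack against each halfspace:

```python
import numpy as np

def random_polytope(points, n_halfspaces=64, seed=0):
    """Enclose `points` (n x d) in a random convex polytope: sample
    random unit normals u_i and take the supporting halfspaces
    {x : <u_i, x> <= max_j <u_i, p_j>}; their intersection is a
    convex polytope containing every input point."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_halfspaces, points.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # unit normals
    h = (points @ U.T).max(axis=0)                 # support values
    return U, h

def descriptor(x, U, h):
    """Signed slack of x against each halfspace; x is inside iff all >= 0."""
    return h - U @ x

pts = np.random.default_rng(1).uniform(size=(100, 2))  # points in the unit square
U, h = random_polytope(pts)
inside = descriptor(np.array([0.5, 0.5]), U, h)
outside = descriptor(np.array([5.0, 5.0]), U, h)
```

A point lies inside the polytope exactly when every slack is nonnegative, which is what makes far-away (out-of-distribution) points easy to flag.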
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To fully exploit the complementary information embedded in different views, we leverage tensor Schatten p-norm regularization.
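One plausible reading of the Butterworth construction above (a sketch under assumed semantics, not the paper's exact formulation; `cutoff` and `order` are hypothetical parameters) maps pairwise distances through the Butterworth magnitude response, so nearby points receive affinity close to 1 and distant points decay smoothly toward 0:

```python
import numpy as np

def butterworth_affinity(X, cutoff=1.0, order=2):
    """Map pairwise Euclidean distances through the Butterworth
    magnitude response 1 / sqrt(1 + (d / cutoff)^(2*order)).
    Small distances give affinity ~1; large ones decay smoothly."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return 1.0 / np.sqrt(1.0 + (D / cutoff) ** (2 * order))

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
A = butterworth_affinity(X, cutoff=1.0, order=2)
```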
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Robust convex biclustering with a tuning-free method [10.603857319905936]
We propose a robust version of the convex biclustering algorithm based on the Huber loss.
The newly introduced robustification parameter brings an extra burden to selecting the optimal parameters.
A real-life biomedical application is also presented.
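The Huber loss that replaces the squared error can be sketched directly (the paper's contribution, the tuning-free choice of the robustification parameter, is not reproduced here; delta = 1.345 is a common default in robust statistics):

```python
import numpy as np

def huber(r, delta=1.345):
    """Huber loss: quadratic for |r| <= delta, linear beyond.
    Large residuals (outliers) grow only linearly, unlike the
    squared loss, which is what makes the biclustering robust."""
    r = np.asarray(r, dtype=float)
    quadratic = np.abs(r) <= delta
    return np.where(quadratic, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

losses = huber(np.array([0.0, 1.0, 10.0]), delta=1.0)
```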
arXiv Detail & Related papers (2022-12-06T16:37:11Z)
- SCR: Smooth Contour Regression with Geometric Priors [10.141085397402314]
SCR is a method that captures resolution-free object contours as complex periodic functions.
We benchmark SCR on the popular COCO 2017 instance segmentation dataset, and show its competitiveness against existing algorithms.
We also design a compact version of our network, which we benchmark on embedded hardware with a wide range of power targets.
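The underlying representation, a closed contour encoded as a truncated complex Fourier series, can be sketched as follows (this shows only the descriptor, not SCR's network that regresses the coefficients):

```python
import numpy as np

def fourier_contour(xy, n_coeffs=4):
    """Encode a closed 2D contour as a truncated complex Fourier series:
    points (x, y) become z = x + iy, and only the lowest positive and
    negative frequencies are kept, giving a smooth, resolution-free
    description of the outline."""
    z = xy[:, 0] + 1j * xy[:, 1]
    Z = np.fft.fft(z)
    kept = np.zeros_like(Z)
    kept[:n_coeffs] = Z[:n_coeffs]     # low positive frequencies (incl. DC)
    kept[-n_coeffs:] = Z[-n_coeffs:]   # low negative frequencies
    return kept

# a circle occupies a single frequency, so it survives truncation exactly
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
back = np.fft.ifft(fourier_contour(circle))
```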
arXiv Detail & Related papers (2022-02-08T11:07:51Z)
- Interpretable Clustering via Multi-Polytope Machines [12.69310440882225]
We propose a novel approach for interpretable clustering that both clusters data points and constructs polytopes around the discovered clusters to explain them.
We benchmark our approach on a suite of synthetic and real-world clustering problems, where our algorithm outperforms state-of-the-art interpretable and non-interpretable clustering algorithms.
arXiv Detail & Related papers (2021-12-10T16:36:32Z)
- PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows [58.96306192736593]
We present PU-Flow, which incorporates normalizing flows and feature interpolation techniques to produce dense points uniformly distributed on the underlying surface.
Specifically, we formulate the upsampling process as point interpolation in a latent space, where the interpolation weights are adaptively learned from local geometric context.
We show that our method outperforms state-of-the-art deep learning-based approaches in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.
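A toy sketch of the interpolation view (not PU-Flow's learned model; the softmax-of-distances weights stand in for the adaptively learned ones, and all names are hypothetical):

```python
import numpy as np

def upsample_latent(latent, k=3, ratio=2, seed=0):
    """Toy sketch of upsampling as interpolation in a feature space:
    each new point is a convex combination of the k nearest latent
    codes around a random anchor, with weights from a softmax over
    negative distances (a stand-in for geometry-adaptive weights)."""
    rng = np.random.default_rng(seed)
    n = latent.shape[0]
    new = []
    for _ in range(n * (ratio - 1)):
        anchor = latent[rng.integers(n)]
        d = np.linalg.norm(latent - anchor, axis=1)
        idx = np.argsort(d)[:k]          # k nearest codes (incl. anchor)
        w = np.exp(-d[idx])
        w /= w.sum()                     # convex interpolation weights
        new.append(w @ latent[idx])
    return np.vstack([latent, np.array(new)])

latent = np.random.default_rng(1).normal(size=(8, 4))
dense = upsample_latent(latent, k=3, ratio=2)
```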
arXiv Detail & Related papers (2021-07-13T07:45:48Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
- Device Heterogeneity in Federated Learning: A Superquantile Approach [0.0]
We propose a framework to handle heterogeneous client devices which do not conform to the population data distribution.
We present an optimization algorithm and establish its convergence to a stationary point.
We conclude with numerical experiments on neural networks as well as linear models on tasks from computer vision and natural language processing.
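The superquantile (conditional value-at-risk) objective behind this framework has a simple discrete form, the mean of the worst (1 - theta) fraction of client losses; a minimal sketch, separate from the paper's actual optimization algorithm:

```python
import numpy as np

def superquantile(losses, theta=0.9):
    """Superquantile (CVaR) at level theta: the mean of the worst
    (1 - theta) fraction of losses. Minimizing it over model weights
    focuses training on the hardest, least-conforming clients."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil((1 - theta) * len(losses)))  # size of the loss tail
    return losses[-k:].mean()

client_losses = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1.0, 1.2, 2.0, 4.0]
tail_mean = superquantile(client_losses, theta=0.9)
```

At theta = 0 the objective reduces to the ordinary average loss; raising theta shifts all the weight onto the worst-off clients.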
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
- Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.