Is an Affine Constraint Needed for Affine Subspace Clustering?
- URL: http://arxiv.org/abs/2005.03888v1
- Date: Fri, 8 May 2020 07:52:17 GMT
- Title: Is an Affine Constraint Needed for Affine Subspace Clustering?
- Authors: Chong You and Chun-Guang Li and Daniel P. Robinson and Rene Vidal
- Abstract summary: In face clustering, the subspaces are linear and subspace clustering methods can be applied directly.
In motion segmentation, the subspaces are affine and an additional affine constraint on the coefficients is often enforced.
This paper shows, both theoretically and empirically, that when the dimension of the ambient space is high relative to the sum of the dimensions of the affine subspaces, the affine constraint has a negligible effect on clustering performance.
- Score: 27.00532615975731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subspace clustering methods based on expressing each data point as a linear
combination of other data points have achieved great success in computer vision
applications such as motion segmentation, face and digit clustering. In face
clustering, the subspaces are linear and subspace clustering methods can be
applied directly. In motion segmentation, the subspaces are affine and an
additional affine constraint on the coefficients is often enforced. However,
since affine subspaces can always be embedded into linear subspaces of one
extra dimension, it is unclear if the affine constraint is really necessary.
This paper shows, both theoretically and empirically, that when the dimension
of the ambient space is high relative to the sum of the dimensions of the
affine subspaces, the affine constraint has a negligible effect on clustering
performance. Specifically, our analysis provides conditions that guarantee the
correctness of affine subspace clustering methods both with and without the
affine constraint, and shows that these conditions are satisfied for
high-dimensional data. Underlying our analysis is the notion of affinely
independent subspaces, which not only provides geometrically interpretable
correctness conditions, but also clarifies the relationships between existing
results for affine subspace clustering.
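The self-expressive model underlying these methods writes each point as a sparse combination of the other points; the affine constraint simply forces those coefficients to sum to one. Below is a minimal sketch, assuming cvxpy is available; the function name `self_expressive_coeffs` and the weight `lam` are illustrative choices, not taken from the paper. The last function shows the one-extra-dimension embedding the abstract mentions as the alternative to the constraint.

```python
import numpy as np
import cvxpy as cp

def self_expressive_coeffs(X, j, affine=False, lam=10.0):
    """Sparse self-expressive coefficients for column j of the data matrix X (d x N):
    minimize ||c||_1 + lam * ||X c - x_j||_2^2, with c_j = 0 so that x_j cannot
    represent itself, and optionally the affine constraint sum(c) = 1."""
    _, N = X.shape
    c = cp.Variable(N)
    constraints = [c[j] == 0]
    if affine:
        constraints.append(cp.sum(c) == 1)   # the affine constraint
    objective = cp.Minimize(cp.norm1(c) + lam * cp.sum_squares(X @ c - X[:, j]))
    cp.Problem(objective, constraints).solve()
    return c.value

def homogeneous_embedding(X):
    """Embed affine subspaces into linear subspaces of one extra dimension
    by appending a constant coordinate to every point."""
    return np.vstack([X, np.ones((1, X.shape[1]))])
```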
Related papers
- Relative intrinsic dimensionality is intrinsic to learning [49.5738281105287]
We introduce a new notion of the intrinsic dimension of a data distribution, which precisely captures the separability properties of the data.
For this intrinsic dimension, the usual rule of thumb becomes a law: high intrinsic dimension guarantees highly separable data.
We show that this relative intrinsic dimension provides both upper and lower bounds on the probability of successfully learning and generalising in a binary classification problem.
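The rule of thumb is easy to see in a toy experiment: keep the per-coordinate class separation fixed and let the ambient dimension grow, and even a nearest-centroid rule separates the two classes almost perfectly. The numpy sketch below only illustrates that concentration effect; it is not the paper's relative intrinsic dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_accuracy(d, n=500, gap=0.2):
    """Two Gaussian classes whose means differ by `gap` in every coordinate;
    the centroid separation grows like gap * sqrt(d)."""
    mu = gap * np.ones(d)
    X0 = rng.standard_normal((n, d))           # class 0 around the origin
    X1 = rng.standard_normal((n, d)) + mu      # class 1 around mu
    c0, c1 = X0.mean(axis=0), X1.mean(axis=0)
    X = np.vstack([X0, X1])
    y = np.r_[np.zeros(n), np.ones(n)]
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred == y).mean()

for d in (2, 16, 128, 1024):
    print(d, nearest_centroid_accuracy(d))   # accuracy climbs toward 1.0
```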
arXiv Detail & Related papers (2023-10-10T10:41:45Z)
- One-step Bipartite Graph Cut: A Normalized Formulation and Its Application to Scalable Subspace Clustering [56.81492360414741]
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We show how to enforce a one-step normalized cut for bipartite graphs, especially with linear-time complexity.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
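The "equivalence to a trace problem" echoes the classical spectral relaxation of the normalized cut: minimize tr(H^T L_sym H) subject to H^T H = I, which is solved by the bottom eigenvectors of the normalized Laplacian. A numpy sketch of that classical two-stage pipeline for a general symmetric affinity W follows; the paper's one-step bipartite formulation is precisely what this sketch does not reproduce.

```python
import numpy as np

def spectral_ncut_embedding(W, k):
    """Spectral relaxation of the normalized cut: the k eigenvectors of
    L_sym = I - D^{-1/2} W D^{-1/2} with smallest eigenvalues minimize
    tr(H^T L_sym H) over orthonormal H."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    return vecs[:, :k]   # rows embed the nodes; cluster them with k-means
```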
arXiv Detail & Related papers (2023-01-06T23:48:37Z)
- Principal Component Analysis in Space Forms [7.822210329345704]
We study Principal Component Analysis (PCA) in space forms.
Finding the optimal low-dimensional affine subspace for given points in a space form amounts to dimensionality reduction.
We propose proper cost functions that enjoy two properties: (1) their optimal affine subspace is the solution to an eigenequation, and (2) optimal affine subspaces of different dimensions form a nested set.
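Both properties in the summary already hold for classical Euclidean PCA, the zero-curvature space form: the optimal subspace comes from an eigen-equation on the covariance, and the optimal subspaces of different dimensions are nested. A numpy sketch of that baseline case; the paper's contribution is extending this to spherical and hyperbolic space forms.

```python
import numpy as np

def pca_subspace(X, k):
    """Top-k principal directions via the eigen-decomposition of the covariance."""
    Xc = X - X.mean(axis=0)          # the optimal affine subspace passes through the mean
    C = Xc.T @ Xc / len(X)
    _, vecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]      # columns: top-k eigenvectors

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))
U2, U3 = pca_subspace(X, 2), pca_subspace(X, 3)
# Nestedness: the 2-dim principal subspace lies inside the 3-dim one.
print(np.allclose(U3 @ (U3.T @ U2), U2))   # True
```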
arXiv Detail & Related papers (2023-01-04T20:08:23Z)
- Unsupervised Manifold Linearizing and Clustering [19.879641608165887]
We propose to optimize the Maximal Coding Rate Reduction metric with respect to both the data representation and a novel doubly stochastic cluster membership.
Experiments on CIFAR-10, -20, -100, and TinyImageNet-200 datasets show that the proposed method is much more accurate and scalable than state-of-the-art deep clustering methods.
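For context on the objective named above: the coding rate of features Z in R^{d x n} is commonly written R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T), and the rate reduction subtracts the per-cluster rates from the global one. A numpy sketch under that standard formulation, using hard labels instead of the paper's doubly stochastic memberships; treat the exact normalization as an assumption.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T) for features Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j/n) R(Z_j): large when clusters are
    individually compressed but their union is spread out."""
    n = Z.shape[1]
    r_parts = sum((np.sum(labels == j) / n) * coding_rate(Z[:, labels == j], eps)
                  for j in np.unique(labels))
    return coding_rate(Z, eps) - r_parts
```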
arXiv Detail & Related papers (2022-10-15T13:53:21Z)
- Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
The percolation transition is defined through the appearance of the infinite cluster.
We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2021-02-19T23:29:03Z)
- Linear Classifiers in Mixed Constant Curvature Spaces [40.82908295137667]
We address the problem of linear classification in a product space form -- a mix of Euclidean, spherical, and hyperbolic spaces.
We prove that linear classifiers in $d$-dimensional constant curvature spaces can shatter exactly $d+1$ points.
We describe a novel perceptron classification algorithm, and establish rigorous convergence results.
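The Euclidean (zero-curvature) member of this family is the classical perceptron, which the paper generalizes to spherical and hyperbolic geometry. Below is a minimal numpy sketch of that Euclidean baseline only; it does not implement the mixed-curvature update.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classical perceptron with labels y in {-1, +1}; the bias is absorbed
    via a homogeneous coordinate appended to each point."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        updated = False
        for x, t in zip(Xb, y):
            if t * (w @ x) <= 0:     # misclassified (or on the boundary)
                w += t * x
                updated = True
        if not updated:              # converged: every point correctly classified
            break
    return w
```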
arXiv Detail & Related papers (2021-02-17T11:06:59Z)
- Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product space.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendations show that the proposed switch spaces achieve new state-of-the-art performance.
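"Choose, combine and switch" can be realized with a top-k gate. The sketch below is a generic, assumed form of such a gate in numpy, with per-space embeddings of a shared dimension; it is not the paper's exact mechanism.

```python
import numpy as np

def sparse_gate(scores, embeddings, k=2):
    """Top-k sparse gating: keep the k highest-scoring component spaces,
    renormalize their weights with a softmax, and zero out the rest."""
    top = np.argsort(scores)[-k:]               # indices of the k chosen spaces
    w = np.exp(scores[top] - scores[top].max()) # numerically stable softmax
    w /= w.sum()
    return sum(wi * embeddings[i] for wi, i in zip(w, top))
```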
arXiv Detail & Related papers (2020-10-08T00:14:59Z)
- A Critique of Self-Expressive Deep Subspace Clustering [23.971512395191308]
Subspace clustering is an unsupervised clustering technique designed to cluster data that is supported on a union of linear subspaces.
We show that there are a number of potential flaws with this approach which have not been adequately addressed in prior work.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
- Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis, called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-05-04T13:09:17Z)
- Stochastic Sparse Subspace Clustering [20.30051592270384]
State-of-the-art subspace clustering methods are based on the self-expressive model, which represents each data point as a linear combination of other data points.
We introduce dropout, based on randomly dropping out data points, to address the issue of over-segmentation.
This leads to a scalable and flexible sparse subspace clustering approach, termed Stochastic Sparse Subspace Clustering.
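The core subsampling idea can be sketched in a few lines: express each point over a random subset of the remaining columns. A minimal numpy illustration with a plain least-squares fit; the actual method is more refined (it handles sparsity and the effect of dropout in expectation), which this sketch omits.

```python
import numpy as np

def dropout_self_expression(X, j, keep_prob=0.5, rng=None):
    """Self-expression of column j of X (d x N) over a random subsample of the
    other columns -- the 'dropping out data points' idea in sketch form."""
    if rng is None:
        rng = np.random.default_rng(0)
    N = X.shape[1]
    others = np.delete(np.arange(N), j)            # x_j may not represent itself
    kept = others[rng.random(len(others)) < keep_prob]
    c_sub, *_ = np.linalg.lstsq(X[:, kept], X[:, j], rcond=None)
    c = np.zeros(N)
    c[kept] = c_sub                                # dropped points get zero weight
    return c
```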
arXiv Detail & Related papers (2020-05-04T13:09:17Z)