The Computational Complexity of Concise Hypersphere Classification
- URL: http://arxiv.org/abs/2312.07103v1
- Date: Tue, 12 Dec 2023 09:33:03 GMT
- Title: The Computational Complexity of Concise Hypersphere Classification
- Authors: Eduard Eiben, Robert Ganian, Iyad Kanj, Sebastian Ordyniak, Stefan
Szeider
- Abstract summary: This paper is the first complexity-theoretic study of the hypersphere classification problem for binary data.
We use the fine-grained parameterized complexity paradigm to analyze the impact of structural properties that may be present in the input data.
- Score: 49.57441416941195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hypersphere classification is a classical and foundational method that can
provide easy-to-process explanations for the classification of real-valued and
binary data. However, obtaining an (ideally concise) explanation via
hypersphere classification is much more difficult when dealing with binary data
than real-valued data. In this paper, we perform the first complexity-theoretic
study of the hypersphere classification problem for binary data. We use the
fine-grained parameterized complexity paradigm to analyze the impact of
structural properties that may be present in the input data as well as
potential conciseness constraints. Our results include stronger lower bounds
and new fixed-parameter algorithms for hypersphere classification of binary
data, which can find an exact and concise explanation when one exists.
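Over binary data, a hypersphere explanation can be read as a Hamming ball: a centre vector and a radius, with a point classified positive exactly when its Hamming distance to the centre is at most the radius. A minimal sketch of this decision rule (the toy data and names are illustrative, not taken from the paper):

```python
def hamming(a, b):
    """Number of coordinates where two binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def hypersphere_classify(x, center, radius):
    """Positive iff x lies inside the Hamming ball B(center, radius)."""
    return hamming(x, center) <= radius

# Toy binary data: a ball of radius 1 around (0,0,0,0) separates the classes.
pos = [(0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 0, 0)]
neg = [(1, 1, 1, 1), (1, 1, 0, 1)]
center, radius = (0, 0, 0, 0), 1
assert all(hypersphere_classify(p, center, radius) for p in pos)
assert not any(hypersphere_classify(n, center, radius) for n in neg)
```

A "concise" explanation in this sense would be one whose description (e.g. the centre) is small; the paper studies when such an exact ball can be found efficiently.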
Related papers
- A Perceptron-based Fine Approximation Technique for Linear Separation [0.0]
This paper presents a novel online learning method that aims at finding a separator hyperplane between data points labelled as either positive or negative.
Weights and biases of artificial neurons can directly be related to hyperplanes in high-dimensional spaces.
The presented method has proven convergence; empirical results show that it can be more efficient than the Perceptron algorithm.
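For reference, the classical Perceptron baseline that the paper compares against can be sketched in a few lines (the toy data is illustrative):

```python
def perceptron(points, labels, epochs=100):
    """Classical Perceptron: find w, b with sign(w.x + b) matching +1/-1 labels."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):
            # Mistake-driven update: adjust only on misclassified points.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:  # converged: every point is on the correct side
            break
    return w, b

# Linearly separable toy data.
pts = [(2.0, 1.0), (1.0, 2.0), (-1.0, -2.0), (-2.0, -1.0)]
ys = [1, 1, -1, -1]
w, b = perceptron(pts, ys)
assert all(y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0
           for x, y in zip(pts, ys))
```

The Perceptron is only guaranteed to terminate on linearly separable data; the paper's contribution is a finer approximation of the separator.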
arXiv Detail & Related papers (2023-09-12T08:35:24Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Provably Accurate and Scalable Linear Classifiers in Hyperbolic Spaces [39.71927912296049]
We propose a unified framework for learning scalable and simple hyperbolic linear classifiers.
The gist of our approach is to focus on Poincaré ball models and formulate the classification problems using tangent space formalisms.
The excellent performance of the Poincaré second-order and strategic perceptrons shows that the proposed framework can be extended to general machine learning problems in hyperbolic spaces.
arXiv Detail & Related papers (2022-03-07T21:36:21Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on observed data alone and maintain a high accuracy even when the missing-data ratio increases.
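The first strategy's imputation step can be illustrated with a minimal k-nearest-neighbour sketch (the paper imputes before estimating covariance matrices; this toy version and its names are illustrative only):

```python
import math

def knn_impute(rows, k=2):
    """Fill missing entries (None) with the mean over the k nearest rows,
    using Euclidean distance on jointly observed coordinates."""
    def dist(a, b):
        obs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not obs:
            return math.inf
        # Scale by the fraction of observed coordinates to stay comparable.
        return math.sqrt(sum((x - y) ** 2 for x, y in obs) * len(a) / len(obs))

    filled = []
    for i, row in enumerate(rows):
        if None not in row:
            filled.append(list(row))
            continue
        neighbours = sorted((r for j, r in enumerate(rows) if j != i),
                            key=lambda r: dist(row, r))[:k]
        filled.append([x if x is not None
                       else sum(r[c] for r in neighbours if r[c] is not None)
                            / max(1, sum(r[c] is not None for r in neighbours))
                       for c, x in enumerate(row)])
    return filled

data = [[1.0, 2.0], [1.2, None], [0.9, 2.1], [5.0, 9.0]]
filled = knn_impute(data)
assert abs(filled[1][1] - 2.05) < 1e-9  # mean of the two nearest rows
```

The second (EM-based) strategy avoids imputation altogether by maximising the observed-data likelihood.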
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z)
- Highly Scalable and Provably Accurate Classification in Poincare Balls [40.82908295137667]
We establish a unified framework for learning scalable and simple hyperbolic linear classifiers with provable performance guarantees.
Our results include a new hyperbolic and second-order perceptron algorithm as well as an efficient and highly accurate convex optimization setup for hyperbolic support vector machine classifiers.
Their accuracy is demonstrated on synthetic data sets comprising millions of points, as well as on complex real-world data sets such as single-cell RNA-seq expression measurements, CIFAR10, Fashion-MNIST and mini-ImageNet.
arXiv Detail & Related papers (2021-09-08T16:59:39Z)
- Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data [77.31213472792088]
The scarcity of class-labeled data is a ubiquitous bottleneck in many machine learning problems.
We address this problem by leveraging Positive-Unlabeled (PU) classification and conditional generation with extra unlabeled data.
We present a novel training framework to jointly target both PU classification and conditional generation when exposed to extra data.
arXiv Detail & Related papers (2020-06-14T08:27:40Z)
- A tetrachotomy of ontology-mediated queries with a covering axiom [1.749935196721634]
Our concern is the problem of efficiently determining the data complexity of answering queries mediated by description logic ontologies and constructing their optimal rewritings to standard database queries.
We focus on Boolean conjunctive queries mediated by a covering axiom, called disjunctive sirups (or d-sirups).
Some d-sirups only have exponential-size resolution proofs, some only double-exponential-size positive existential rewritings and single-exponential-size nonrecursive datalog rewritings.
arXiv Detail & Related papers (2020-06-07T14:47:07Z)
- New advances in enumerative biclustering algorithms with online partitioning [80.22629846165306]
This paper further extends RIn-Close_CVC, a biclustering algorithm capable of performing an efficient, complete, correct and non-redundant enumeration of maximal biclusters with constant values on columns in numerical datasets.
The improved algorithm, called RIn-Close_CVC3, keeps the attractive properties of RIn-Close_CVC and is characterized by a drastic reduction in memory usage and a consistent gain in runtime.
arXiv Detail & Related papers (2020-03-07T14:54:26Z)
- Data Structures & Algorithms for Exact Inference in Hierarchical Clustering [41.24805506595378]
We present novel dynamic-programming algorithms for exact inference in hierarchical clustering based on a novel trellis data structure.
Our algorithms scale in time and space proportional to the size of the powerset of $N$ elements, which is super-exponentially more efficient than explicitly considering each of the $(2N-3)!!$ possible hierarchies.
arXiv Detail & Related papers (2020-02-26T17:43:53Z)
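The $(2N-3)!!$ count of binary hierarchies over $N$ leaves, and the gap to the $2^N$ powerset scaling, can be checked with a short sketch:

```python
def double_factorial(n):
    """n!! = n * (n - 2) * (n - 4) * ... (empty product is 1)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def num_hierarchies(n):
    """Number of rooted binary trees (hierarchies) over n labelled leaves."""
    return double_factorial(2 * n - 3)

# The trellis works over the 2^n subsets of leaves, while naive enumeration
# would visit all (2n-3)!! hierarchies -- a super-exponentially larger count.
for n in (4, 8, 16):
    print(n, 2 ** n, num_hierarchies(n))
```

For example, 4 leaves admit $5!! = 15$ distinct hierarchies, while the trellis needs only the $2^4 = 16$ subsets; the ratio reverses and explodes as $N$ grows.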
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.