visClust: A visual clustering algorithm based on orthogonal projections
- URL: http://arxiv.org/abs/2211.03894v3
- Date: Thu, 7 Dec 2023 12:18:20 GMT
- Title: visClust: A visual clustering algorithm based on orthogonal projections
- Authors: Anna Breger, Clemens Karner, Martin Ehler
- Abstract summary: visClust is a novel clustering algorithm based on lower dimensional data representations and visual interpretation.
The code is made available on GitHub and straightforward to use.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel clustering algorithm, visClust, that is based on lower-dimensional data representations and visual interpretation. To this end, we design a transformation that represents the data as a binary integer array, enabling the use of image processing methods to select a partition.
Qualitative and quantitative analyses, measured by accuracy and the adjusted Rand index, show that the algorithm performs well while requiring low runtime and RAM. We compare the results to six state-of-the-art algorithms with available code, confirming the quality of visClust through superior performance in most experiments. Moreover, the algorithm requires just one obligatory input parameter while allowing optimization via optional parameters. The code is made available on GitHub and is straightforward to use.
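The abstract does not spell out the transformation, but the core idea, projecting data orthogonally to a low-dimensional space, rasterising the projection into a binary image and partitioning it with image-processing tools, can be sketched as follows. The PCA-style choice of projection, the grid size and the connected-component labelling step are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def binary_occupancy(points, grid=32):
    """Project points onto their top-2 orthonormal (principal) directions
    and rasterise them into a binary occupancy image."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt[:2].T                      # n x 2 projection
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    idx = ((proj - lo) / (hi - lo + 1e-12) * (grid - 1)).astype(int)
    img = np.zeros((grid, grid), dtype=np.uint8)
    img[idx[:, 0], idx[:, 1]] = 1
    return img, idx

def connected_components(img):
    """8-connected component labelling of a binary image via flood fill."""
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for x, y in zip(*np.nonzero(img)):
        if labels[x, y]:
            continue
        current += 1
        stack = [(x, y)]
        while stack:
            i, j = stack.pop()
            if (0 <= i < img.shape[0] and 0 <= j < img.shape[1]
                    and img[i, j] and not labels[i, j]):
                labels[i, j] = current
                stack += [(i + di, j + dj)
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
    return labels, current

# Two well-separated Gaussian blobs: each point inherits the label
# of the image cell it falls into.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (100, 3)),
                  rng.normal(5, 0.3, (100, 3))])
img, idx = binary_occupancy(data)
labels, n_clusters = connected_components(img)
assignment = labels[idx[:, 0], idx[:, 1]]   # per-point cluster id
```

A blob may fragment into several components at coarse grids, so in practice the grid resolution (here the single parameter `grid`) plays the role of the algorithm's obligatory input.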
Related papers
- k-fold Subsampling based Sequential Backward Feature Elimination [11.640238391159118]
This algorithm is a hybrid feature selection approach combining the benefits of filter and wrapper methods.
It can improve the detection speed of the SVM classifier by over 50% with up to 2% better detection accuracy.
Our algorithm also outperforms the equivalent systems introduced in the deformable part model approach with around 9% improvement in the detection accuracy.
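As a rough illustration of the wrapper side of such a hybrid scheme, the sketch below runs greedy backward feature elimination scored by k-fold accuracy. A simple nearest-centroid classifier stands in for the paper's SVM, and all function names are hypothetical.

```python
import numpy as np

def kfold_score(X, y, feats, k=3, seed=0):
    """Mean k-fold accuracy of a nearest-centroid classifier
    restricted to the feature subset `feats`."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        Xtr, Xte = X[train][:, feats], X[test][:, feats]
        classes = sorted(np.unique(y))
        cents = {c: Xtr[y[train] == c].mean(axis=0) for c in classes}
        d = np.stack([np.linalg.norm(Xte - cents[c], axis=1) for c in classes])
        pred = np.array(classes)[d.argmin(axis=0)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

def backward_elimination(X, y, min_feats=1):
    """Greedily drop the feature whose removal hurts the cross-validated
    score least; stop as soon as removal would lower the score."""
    feats = list(range(X.shape[1]))
    best = kfold_score(X, y, feats)
    while len(feats) > min_feats:
        scores = [(kfold_score(X, y, [f for f in feats if f != g]), g)
                  for g in feats]
        s, g = max(scores)
        if s < best:
            break
        best, feats = s, [f for f in feats if f != g]
    return feats, best

# Feature 0 carries the signal; features 1 and 2 are pure noise.
rng = np.random.default_rng(1)
y = np.repeat(np.array([0, 1]), 60)
X = rng.normal(size=(120, 3))
X[:, 0] += 3.0 * y
feats, acc = backward_elimination(X, y)
```

The subsampling in the paper additionally shrinks the training folds to cut wrapper cost; the sketch omits that step.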
arXiv Detail & Related papers (2025-03-14T23:10:08Z) - Efficient and Accurate Optimal Transport with Mirror Descent and
Conjugate Gradients [15.128885770407132]
We design a novel algorithm for optimal transport by drawing from the literature on entropic optimal transport, mirror descent and conjugate gradients.
Our scalable and GPU-parallelizable algorithm computes the Wasserstein distance with extreme precision, reaching relative error rates of $10^{-8}$ without numerical stability issues.
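The paper's mirror-descent/conjugate-gradient solver is not reproduced here, but entropic optimal transport itself is classically solved with the Sinkhorn fixed-point iteration, which can be read as mirror descent in KL geometry on the transport polytope. A minimal sketch on a toy problem:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropic optimal transport between histograms a and b with cost
    matrix C, solved by Sinkhorn's alternating scaling iteration."""
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]    # transport plan
    return P, float((P * C).sum())     # plan and transport cost

# Toy problem: two equal masses at matching positions on the line.
x = np.array([0.0, 1.0])
y = np.array([0.0, 1.0])
C = (x[:, None] - y[None, :]) ** 2     # squared-distance cost
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
P, cost = sinkhorn(a, b, C)
```

For small regularization eps the kernel entries underflow, which is exactly the numerical-stability issue (usually addressed with log-domain updates) that the cited paper sidesteps.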
arXiv Detail & Related papers (2023-07-17T14:09:43Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
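A classical (unlearned) CountSketch places one random ±1 entry per column; the cited work learns those positions instead of sampling them. A sketch of the classical construction applied to a sketched least-squares solve, with illustrative sizes:

```python
import numpy as np

def countsketch(n_rows, n_cols, rng):
    """Classical CountSketch: each input coordinate is hashed to one
    uniformly random row with a random sign."""
    S = np.zeros((n_rows, n_cols))
    rows = rng.integers(0, n_rows, size=n_cols)
    signs = rng.choice([-1.0, 1.0], size=n_cols)
    S[rows, np.arange(n_cols)] = signs
    return S

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 50                 # tall problem, sketch size m
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
bvec = A @ x_true                     # noiseless right-hand side
S = countsketch(m, n, rng)
# Solve the much smaller sketched problem min ||S A x - S b||.
x_hat, *_ = np.linalg.lstsq(S @ A, S @ bvec, rcond=None)
```

Because multiplication by S touches each row of A once, the sketch is applied in a single streaming pass; the learned variant keeps this structure and only optimizes where the non-zeros land.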
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - Sub-Image Histogram Equalization using Coot Optimization Algorithm for
Segmentation and Parameter Selection [0.0]
The mean and variance based sub-image histogram equalization (MVSIHE) algorithm is one of the contrast enhancement methods proposed in the literature.
In this study, we employed one of the most recent optimization algorithms, namely, coot optimization algorithm (COA) for selecting appropriate parameters for the MVSIHE algorithm.
The results show that the proposed method can be used in the field of biomedical image processing.
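Neither the MVSIHE details nor the COA parameter search are reproduced here; the sketch below shows only the underlying idea of mean-split sub-image equalization: the sub-histograms below and above the mean are equalized independently, which preserves overall brightness better than global equalization. The function name and the plain empirical-CDF mapping are assumptions.

```python
import numpy as np

def sub_image_equalize(img):
    """Split an 8-bit image at its mean intensity and histogram-equalize
    the two sub-images independently, each within its own range."""
    img = np.asarray(img, dtype=np.float64)
    mean = img.mean()
    out = np.empty_like(img)
    for mask, lo, hi in [(img <= mean, 0.0, mean), (img > mean, mean, 255.0)]:
        vals = img[mask].astype(int)
        if vals.size == 0:
            continue
        hist = np.bincount(vals, minlength=256)
        cdf = np.cumsum(hist) / vals.size       # empirical CDF of sub-image
        out[mask] = lo + cdf[vals] * (hi - lo)  # map CDF onto [lo, hi]
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
out = sub_image_equalize(img)
```

MVSIHE uses both the mean and variance to pick the split points and introduces tunable parameters; those are exactly what the coot optimization algorithm selects in the paper.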
arXiv Detail & Related papers (2022-05-31T06:51:45Z) - Accelerating ERM for data-driven algorithm design using output-sensitive techniques [26.32088674030797]
We study techniques to develop efficient learning algorithms for data-driven algorithm design.
Our approach involves novel ingredients, including an output-sensitive algorithm for enumerating the polytopes induced by a set of hyperplanes.
We illustrate our techniques by giving algorithms for pricing problems, linkage-based clustering and dynamic-programming based sequence alignment.
arXiv Detail & Related papers (2022-04-07T17:27:18Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
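A minimal sketch of the bandit viewpoint: a UCB-style selector that treats runtimes as losses and only observes them censored at a cutoff. The exponential runtime model, the exploration constant c and all names are illustrative, not the paper's estimators.

```python
import numpy as np

def ucb_select(mean_runtimes, cutoff, rounds, rng, c=2.0):
    """Online algorithm selection with censored runtime feedback:
    pick the arm with the lowest optimistic (lower-confidence) loss,
    observing runtimes only up to `cutoff`."""
    k = len(mean_runtimes)              # number of candidate algorithms
    counts = np.zeros(k)
    losses = np.zeros(k)                # sums of censored runtimes
    picks = []
    for t in range(rounds):
        if t < k:
            arm = t                     # play every arm once first
        else:
            mean = losses / counts
            bonus = np.sqrt(c * np.log(t + 1) / counts)
            arm = int(np.argmin(mean - bonus))  # optimism for losses
        obs = min(rng.exponential(mean_runtimes[arm]), cutoff)  # censoring
        counts[arm] += 1
        losses[arm] += obs
        picks.append(arm)
    return picks

rng = np.random.default_rng(0)
# Three solvers with mean runtimes 1, 3 and 5; timeout after 4 seconds.
picks = ucb_select([1.0, 3.0, 5.0], cutoff=4.0, rounds=400, rng=rng)
```

Note that the censored sample mean underestimates slow arms' true runtimes; the paper's contribution includes loss estimators that handle this censoring without storing the whole history.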
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
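For a full-rank tall matrix the leverage scores are simply the squared row norms of the orthonormal factor of a thin QR; randomized sketches give cheaper approximations. A sketch of both, with the sketch size chosen arbitrarily (the paper's rank-revealing machinery for rank-deficient inputs is not reproduced):

```python
import numpy as np

def leverage_scores(A):
    """Exact statistical leverage scores of a full-column-rank matrix:
    squared row norms of the orthonormal factor Q of a thin QR."""
    Q, _ = np.linalg.qr(A)
    return (Q ** 2).sum(axis=1)

def approx_leverage(A, m=100, rng=None):
    """Randomized approximation: sketch A down to m rows, take R from a
    QR of the sketch, and use the squared row norms of A @ inv(R)."""
    rng = np.random.default_rng(0) if rng is None else rng
    S = rng.normal(size=(m, A.shape[0])) / np.sqrt(m)
    _, R = np.linalg.qr(S @ A)
    # solve(R.T, A.T).T computes A @ inv(R) without forming the inverse.
    return (np.linalg.solve(R.T, A.T).T ** 2).sum(axis=1)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
scores = leverage_scores(A)
approx = approx_leverage(A, rng=rng)
```

The exact scores always sum to the rank and lie in [0, 1]; the sketched variant trades a multiplicative error for avoiding a full factorization of A.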
arXiv Detail & Related papers (2021-05-23T19:21:55Z) - Slowly Varying Regression under Sparsity [5.22980614912553]
We present the framework of slowly varying regression under sparsity, allowing regression models to exhibit slow and sparse variations.
We suggest a procedure that reformulates the problem as a binary convex optimization problem.
We show that the resulting model outperforms competing formulations in comparable times across various datasets.
arXiv Detail & Related papers (2021-02-22T04:51:44Z) - Learned Block Iterative Shrinkage Thresholding Algorithm for
Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
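The unrolled network itself is not reproduced here; the sketch below is the plain (block-free) ISTA iteration that such architectures unfold, applied to a synthetic sparse-recovery problem. The dimensions, the regularization weight and the problem instance are illustrative.

```python
import numpy as np

def ista(A, b, lam=0.01, step=None, iters=2000):
    """Plain ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.  The cited
    paper unrolls such iterations into network layers and learns the
    thresholds (and a block-sparsity pattern) from data."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

# Synthetic compressed-sensing instance: 3-sparse signal, 60 measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -3.0, 1.5]
b = A @ x_true
x_hat = ista(A, b)
```

Unrolling replaces the fixed `step` and `lam` in every iteration with learned per-layer values, which is what lets the network converge in far fewer layers than ISTA needs iterations.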
arXiv Detail & Related papers (2020-12-07T09:27:16Z) - Learning the Positions in CountSketch [51.15935547615698]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning algorithm that also optimizes the locations of the non-zero entries.
We show this algorithm gives better accuracy for low rank approximation than previous work, and apply it to other problems such as $k$-means clustering for the first time.
arXiv Detail & Related papers (2020-07-20T05:06:29Z) - Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.