Scalable Cluster-Consistency Statistics for Robust Multi-Object Matching
- URL: http://arxiv.org/abs/2201.04797v1
- Date: Thu, 13 Jan 2022 05:33:18 GMT
- Title: Scalable Cluster-Consistency Statistics for Robust Multi-Object Matching
- Authors: Yunpeng Shi, Shaohan Li, Tyler Maunu and Gilad Lerman
- Abstract summary: We develop new statistics for robustly filtering corrupted keypoint matches in the structure from motion pipeline.
The statistics are designed to give smaller values to corrupted matches than to uncorrupted matches.
We demonstrate the efficacy of this method on synthetic and real structure from motion datasets and show that it achieves state-of-the-art accuracy and speed in these tasks.
- Score: 16.899237833310064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop new statistics for robustly filtering corrupted keypoint matches
in the structure from motion pipeline. The statistics are based on consistency
constraints that arise within the clustered structure of the graph of keypoint
matches. The statistics are designed to give smaller values to corrupted
matches than to uncorrupted matches. These new statistics are combined with an
iterative reweighting scheme to filter keypoints, which can then be fed into
any standard structure from motion pipeline. This filtering method can be
efficiently implemented and scaled to massive datasets as it only requires
sparse matrix multiplication. We demonstrate the efficacy of this method on
synthetic and real structure from motion datasets and show that it achieves
state-of-the-art accuracy and speed in these tasks.
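The paper's exact statistics are not reproduced here, but the sparse-matrix idea lends itself to a compact illustration. The sketch below is a simplified cycle-consistency count built on the same principle: a match (edge) supported by consistent paths through common neighbors scores high, while a corrupted match scores low. All names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def cycle_support(A):
    """Count, for each existing match (edge), the 2-paths closing a
    triangle through it; (A @ A)[i, j] is the number of common neighbors."""
    A2 = A @ A
    return A.multiply(A2).tocsr()  # keep counts only on existing edges

# Toy match graph: keypoints 0, 1, 2 form a consistent triangle,
# while the edge (0, 3) is a spurious match with no cycle support.
rows = [0, 1, 1, 2, 0, 2, 0, 3]
cols = [1, 0, 2, 1, 2, 0, 3, 0]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
S = cycle_support(A)
print(S[0, 1], S[0, 3])  # the consistent edge scores 1, the spurious one 0
```

In an iterative reweighting scheme, such support scores could be normalized and fed back as edge weights before recomputing the product, down-weighting suspect matches at each pass; only sparse matrix multiplications are needed, which is what makes this kind of filtering scale.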
Related papers
- Hierarchical Clustering With Confidence [6.4793198569929356]
Agglomerative hierarchical clustering is highly sensitive to small perturbations in the data. We show how randomizing hierarchical clustering can be useful not just for measuring stability but also for designing valid hypothesis testing procedures.
arXiv Detail & Related papers (2025-12-06T18:18:20Z) - Revisiting Dynamic Graph Clustering via Matrix Factorization [26.290080380814196]
Matrix factorization-based methods are promising approaches for this task.
They tend to lack robustness and are vulnerable to real-world noisy data.
We propose temporal separated matrix factorization, bi-clustering regularization, and selective embedding updating.
arXiv Detail & Related papers (2025-02-10T02:57:46Z) - CoHiRF: A Scalable and Interpretable Clustering Framework for High-Dimensional Data [0.30723404270319693]
We propose Consensus Hierarchical Random Feature (CoHiRF), a novel clustering method designed to address challenges effectively.
CoHiRF leverages random feature selection to mitigate noise and dimensionality effects, repeatedly applies K-Means clustering in reduced feature spaces, and combines results through a unanimous consensus criterion.
CoHiRF is computationally efficient with a running time comparable to K-Means, scalable to massive datasets, and exhibits robust performance against state-of-the-art methods such as SC-SRGF, HDBSCAN, and OPTICS.
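The pipeline described above (random feature subsets, repeated K-Means, unanimous consensus) can be sketched as follows. This is a toy reconstruction, not the authors' implementation: the k-means routine is a minimal from-scratch version and all parameter names are hypothetical.

```python
import numpy as np

def kmeans_labels(X, k, iters=20, seed=0):
    """Minimal from-scratch k-means (illustrative stand-in for the real step)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def unanimous_consensus(X, k=2, runs=5, frac=0.5, seed=0):
    """Cluster on random feature subsets; pairs co-clustered in every run
    form the unanimous-consensus co-assignment matrix."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    co = np.zeros((n, n), dtype=int)
    for r in range(runs):
        feats = rng.choice(d, size=max(1, int(frac * d)), replace=False)
        labels = kmeans_labels(X[:, feats], k, seed=seed + r)
        co += labels[:, None] == labels[None, :]
    return co == runs

# Two well-separated blobs of 5 points each in 4 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (5, 4)), rng.normal(100.0, 0.1, (5, 4))])
M = unanimous_consensus(X, k=2, runs=5)
```

Requiring unanimity across runs is what gives the consensus its noise robustness: a pair of points merged by a single unlucky feature subset is not merged in the final result.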
arXiv Detail & Related papers (2025-02-01T09:38:44Z) - Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models [65.82630283336051]
We show that the space spanned by the combination of dimensions and attributes is insufficiently sampled by existing training schemes of diffusion generative models.
We present a simple fix to this problem by constructing processes that fully exploit the structures, hence the name ComboStoc.
arXiv Detail & Related papers (2024-05-22T15:23:10Z) - Efficient Similarity-based Passive Filter Pruning for Compressing CNNs [23.661189257759535]
Convolution neural networks (CNNs) have shown great success in various applications.
The computational complexity and memory footprint of CNNs are a bottleneck for their deployment on resource-constrained devices.
Recent efforts towards reducing the computation cost and the memory overhead of CNNs involve similarity-based passive filter pruning methods.
arXiv Detail & Related papers (2022-10-27T09:57:47Z) - Autoencoder Based Iterative Modeling and Multivariate Time-Series
Subsequence Clustering Algorithm [0.0]
This paper introduces an algorithm for the detection of change-points and the identification of the corresponding subsequences in transient multivariate time-series data (MTSD).
We use a recurrent neural network (RNN) based Autoencoder (AE) which is iteratively trained on incoming data.
A model of the identified subsequence is saved and used for recognition of repeating subsequences as well as fast offline clustering.
arXiv Detail & Related papers (2022-09-09T09:59:56Z) - Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when the only bias is in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z) - Missing Value Imputation on Multidimensional Time Series [16.709162372224355]
We present DeepMVI, a deep learning method for missing value imputation in multidimensional time-series datasets.
DeepMVI combines fine-grained and coarse-grained patterns along a time series, and trends from related series across categorical dimensions.
Experiments show that DeepMVI is significantly more accurate, reducing error by more than 50% in more than half the cases.
arXiv Detail & Related papers (2021-03-02T09:55:05Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
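As a rough illustration of what such an objective might look like (the exact formulation in the paper may differ), one common way to combine reconstruction error with an $l_{2,p}$ row-sparsity penalty is sketched below; the function name and default parameters are hypothetical.

```python
import numpy as np

def sparse_pca_objective(X, W, lam=0.1, p=0.5):
    """Reconstruction error of projecting X onto span(W), plus an
    l_{2,p} row-sparsity penalty: sum_i ||W_i||_2 ** p (hypothetical form)."""
    recon = np.linalg.norm(X - X @ W @ W.T, "fro") ** 2
    penalty = np.sum(np.linalg.norm(W, axis=1) ** p)
    return recon + lam * penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
W, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # orthonormal 5x2 projection
val = sparse_pca_objective(X, W)
# rows of W driven to small l_2 norm mark features that can be discarded
```

With $p < 1$ the penalty pushes entire rows of $W$ toward zero, which is what turns the projection into a feature selector.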
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Robustness to Missing Features using Hierarchical Clustering with Split
Neural Networks [39.29536042476913]
We propose a simple yet effective approach that clusters similar input features together using hierarchical clustering.
We evaluate this approach on a series of benchmark datasets and show promising improvements even with simple imputation techniques.
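The feature-clustering step described above can be sketched with standard tools. Assuming correlation is used as the similarity between features (an assumption; the paper may use a different measure), each resulting group would then be routed to its own split network.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_features(X, n_groups):
    """Group correlated input features via agglomerative clustering."""
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))  # feature-feature distance
    condensed = dist[np.triu_indices_from(dist, k=1)]  # upper triangle, row-major
    Z = linkage(condensed, method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")

# Toy data: features (0, 1) are near-duplicates, as are features (2, 3).
rng = np.random.default_rng(0)
a, b = rng.normal(size=100), rng.normal(size=100)
X = np.column_stack([a, a + 0.01 * rng.normal(size=100),
                     b, b + 0.01 * rng.normal(size=100)])
groups = cluster_features(X, n_groups=2)  # features 0,1 share a group; 2,3 the other
```

Grouping correlated features means that when one feature is missing, its cluster-mates carry much of the same signal, which is what makes simple imputation within a group effective.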
arXiv Detail & Related papers (2020-11-19T00:35:08Z) - New advances in enumerative biclustering algorithms with online
partitioning [80.22629846165306]
This paper further extends RIn-Close_CVC, a biclustering algorithm capable of performing an efficient, complete, correct and non-redundant enumeration of maximal biclusters with constant values on columns in numerical datasets.
The improved algorithm, called RIn-Close_CVC3, keeps the attractive properties of RIn-Close_CVC and is characterized by a drastic reduction in memory usage and a consistent gain in runtime.
arXiv Detail & Related papers (2020-03-07T14:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.