DGSAC: Density Guided Sampling and Consensus
- URL: http://arxiv.org/abs/2006.02413v1
- Date: Wed, 3 Jun 2020 17:42:53 GMT
- Title: DGSAC: Density Guided Sampling and Consensus
- Authors: Lokender Tiwari and Saket Anand
- Abstract summary: Kernel Residual Density is a key differentiator between inliers and outliers.
We propose two model selection algorithms: one based on an optimal quadratic program and one greedy.
We evaluate our method on a wide variety of tasks, such as planar segmentation, motion segmentation, vanishing point estimation, plane fitting to 3D point clouds, and line and circle fitting.
- Score: 4.808421423598809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust multiple model fitting plays a crucial role in many computer vision
applications. Unlike single-model fitting, multi-model fitting poses additional
challenges, the two most important being the unknown number of models and the
inlier noise scale, which are in general supplied by the user from ground truth
or other auxiliary information. Mode-seeking/clustering-based approaches depend
crucially on the quality of the generated model hypotheses. While guided
sampling approaches based on preference analysis have shown remarkable
performance, they operate within a time budget that the user provides as a
reasonable guess. In this paper, we deviate from both the mode-seeking and
time-budget frameworks. We propose a concept called Kernel Residual Density
(KRD) and apply it to various components of a multiple-model fitting pipeline.
The Kernel Residual Density acts as a key differentiator between inliers and
outliers. We use KRD to guide and automatically stop the sampling process: the
sampling stops once a set of hypotheses that can explain all the data points
has been generated. An explanation score is maintained for each data point and
updated on the fly. We propose two model selection algorithms, one based on an
optimal quadratic program and one greedy. Unlike mode-seeking approaches, our
model selection algorithms seek one representative hypothesis for each genuine
structure present in the data. We evaluate our method (dubbed DGSAC) on a wide
variety of tasks, such as planar segmentation, motion segmentation, vanishing
point estimation, plane fitting to 3D point clouds, and line and circle
fitting, which demonstrates the effectiveness and unified nature of our method.
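To make the pipeline described above concrete, here is a minimal, illustrative Python sketch of a KRD-style guided sampling loop with greedy model selection, applied to 2D line fitting. The Gaussian kernel, the explanation-score update, and all thresholds are assumptions chosen for illustration; they are not the authors' exact DGSAC formulation.

```python
# Illustrative sketch of KRD-guided sampling with greedy model selection,
# applied to 2D line fitting. The kernel choice, score updates, and thresholds
# are assumptions for illustration, NOT the authors' exact DGSAC formulation.
import numpy as np

def fit_line(pair):
    """Fit a line ax + by + c = 0 (with a^2 + b^2 = 1) through two points."""
    (x1, y1), (x2, y2) = pair
    a, b = y2 - y1, x1 - x2
    norm = np.hypot(a, b) + 1e-12
    a, b = a / norm, b / norm
    return np.array([a, b, -(a * x1 + b * y1)])

def residuals(points, line):
    """Orthogonal point-to-line distances."""
    return np.abs(points @ line[:2] + line[2])

def kernel_residual_density(res, bandwidth=0.05):
    """Toy stand-in for KRD: for each point, a kernel-weighted count of other
    points whose residual to this hypothesis is similar (assumed Gaussian kernel)."""
    diff = res[:, None] - res[None, :]
    k = np.exp(-0.5 * (diff / bandwidth) ** 2)
    return k.sum(axis=1) / (len(res) * bandwidth)

def dgsac_like_fit(points, explain_thresh=0.8, max_hyp=500, min_support=10,
                   rng=np.random):
    n = len(points)
    explanation = np.zeros(n)         # per-point explanation score, updated on the fly
    hypotheses, densities = [], []
    # Sampling stops once every point is explained well enough (or max_hyp is hit).
    while len(hypotheses) < max_hyp and explanation.min() < explain_thresh:
        # Bias the minimal-sample draw toward points that are still poorly explained.
        w = 1.0 / (1.0 + explanation)
        idx = rng.choice(n, size=2, replace=False, p=w / w.sum())
        line = fit_line(points[idx])
        krd = kernel_residual_density(residuals(points, line))
        hypotheses.append(line)
        densities.append(krd)
        explanation = np.maximum(explanation, krd / (krd.max() + 1e-12))
    # Greedy model selection: repeatedly keep the hypothesis that explains the
    # largest number of not-yet-covered points (one representative per structure).
    supports = [d > 0.5 * d.max() for d in densities]
    selected, covered = [], np.zeros(n, dtype=bool)
    while True:
        gains = [np.sum(s & ~covered) for s in supports]
        best = int(np.argmax(gains))
        if gains[best] < min_support:
            break
        selected.append(hypotheses[best])
        covered |= supports[best]
    return selected
```

On toy data containing a few line structures plus uniform outliers, this loop tends to retain one line per structure, matching the behaviour the abstract describes for the greedy variant; the quadratic-program-based variant would presumably replace the greedy loop with a global selection objective.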
Related papers
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - PARSAC: Accelerating Robust Multi-Model Fitting with Parallel Sample Consensus [26.366299016589256]
We present a real-time method for robust estimation of multiple instances of geometric models from noisy data.
A neural network segments the input data into clusters representing potential model instances.
We demonstrate state-of-the-art performance on these as well as multiple established datasets, with inference times as small as five milliseconds per image.
arXiv Detail & Related papers (2024-01-26T14:54:56Z) - Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z) - CPPF++: Uncertainty-Aware Sim2Real Object Pose Estimation by Vote Aggregation [67.12857074801731]
We introduce a novel method, CPPF++, designed for sim-to-real pose estimation.
To address the challenge posed by vote collision, we propose a novel approach that involves modeling the voting uncertainty.
We incorporate several innovative modules, including noisy pair filtering, online alignment optimization, and a feature ensemble.
arXiv Detail & Related papers (2022-11-24T03:27:00Z) - Spectral goodness-of-fit tests for complete and partial network data [1.7188280334580197]
We use recent results in random matrix theory to derive a general goodness-of-fit test for dyadic data.
We show that our method, when applied to a specific model of interest, provides a straightforward, computationally fast way of selecting parameters.
Our method leads to improved community detection algorithms.
arXiv Detail & Related papers (2021-06-17T17:56:30Z) - Manifold Topology Divergence: a Framework for Comparing Data Manifolds [109.0784952256104]
We develop a framework for comparing data manifolds, aimed at the evaluation of deep generative models.
Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence).
We demonstrate that the MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance.
arXiv Detail & Related papers (2021-06-08T00:30:43Z) - Finding Geometric Models by Clustering in the Consensus Space [61.65661010039768]
We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies.
We present a number of applications where the use of multiple geometric models improves accuracy.
These include pose estimation from multiple generalized homographies and trajectory estimation of fast-moving objects.
arXiv Detail & Related papers (2021-03-25T14:35:07Z) - Reinforced Data Sampling for Model Diversification [15.547681142342846]
This paper proposes a new Reinforced Data Sampling (RDS) method to learn how to sample data adequately.
We formulate the optimisation problem of model diversification $\delta$-div in data sampling to maximise learning potential and optimum allocation by injecting model diversity.
Our results suggest that trainable sampling for model diversification is useful for competition organisers, researchers, or even beginners seeking to realise the full potential of various machine learning tasks.
arXiv Detail & Related papers (2020-06-12T11:46:13Z) - CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
For self-supervised learning of the search strategy, we evaluate the proposed algorithm on multi-homography estimation and demonstrate accuracy superior to state-of-the-art methods (an illustrative conditional-sampling sketch follows this entry).
arXiv Detail & Related papers (2020-01-08T17:37:01Z)
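As a companion to the DGSAC sketch above, the following is a minimal illustration of the "conditional sample consensus" idea behind CONSAC: sampling for the next model is conditioned on what previously found models already explain. CONSAC learns these sampling weights with a neural network; the hand-crafted down-weighting rule, the thresholds, and the helper signatures (fit_fn, residual_fn, e.g. the fit_line and residuals functions from the earlier sketch) are assumptions for illustration only, not the published method.

```python
# Sequential, conditionally guided sampling in the spirit of CONSAC.
# CONSAC learns the sampling weights with a neural network; the fixed
# down-weighting rule below is an illustrative stand-in, not the real method.
import numpy as np

def sequential_conditional_fit(points, fit_fn, residual_fn, n_models=3,
                               sample_size=2, iters=200, inlier_thresh=0.02,
                               min_support=10, rng=np.random):
    n = len(points)
    weights = np.ones(n)              # sampling weights, conditioned on prior detections
    models = []
    for _ in range(n_models):
        best_model, best_inliers = None, np.zeros(n, dtype=bool)
        for _ in range(iters):
            # Draw a minimal sample (2 points for lines, 4 correspondences for homographies).
            idx = rng.choice(n, size=sample_size, replace=False, p=weights / weights.sum())
            model = fit_fn(points[idx])
            inliers = residual_fn(points, model) < inlier_thresh
            if inliers.sum() > best_inliers.sum():
                best_model, best_inliers = model, inliers
        if best_model is None or best_inliers.sum() < min_support:
            break
        models.append(best_model)
        # Condition the next round on this detection: points already explained
        # become unlikely to be drawn into future minimal samples.
        weights[best_inliers] *= 0.01
    return models
```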