Latent Semantic Consensus For Deterministic Geometric Model Fitting
- URL: http://arxiv.org/abs/2403.06444v1
- Date: Mon, 11 Mar 2024 05:35:38 GMT
- Title: Latent Semantic Consensus For Deterministic Geometric Model Fitting
- Authors: Guobao Xiao and Jun Yu and Jiayi Ma and Deng-Ping Fan and Ling Shao
- Abstract summary: We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
- Score: 109.44565542031384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating reliable geometric model parameters from the data with severe
outliers is a fundamental and important task in computer vision. This paper
attempts to sample high-quality subsets and select model instances to estimate
parameters from multi-structural data. To this end, we propose an
effective method called Latent Semantic Consensus (LSC). The principle of LSC
is to preserve the latent semantic consensus in both data points and model
hypotheses. Specifically, LSC formulates the model fitting problem into two
latent semantic spaces based on data points and model hypotheses, respectively.
Then, LSC explores the distributions of points in the two latent semantic
spaces, to remove outliers, generate high-quality model hypotheses, and
effectively estimate model instances. Finally, LSC is able to provide
consistent and reliable solutions within only a few milliseconds for general
multi-structural model fitting, due to its deterministic fitting nature and
efficiency. Compared with several state-of-the-art model fitting methods, our
LSC achieves significantly superior accuracy and speed on both synthetic data
and real images. The code will be available at
https://github.com/guobaoxiao/LSC.
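As a rough illustration of the preference-matrix-and-embedding idea described above (a minimal sketch of the general approach, not the authors' released algorithm; the Gaussian kernel width, latent dimension, and the norm-based outlier cut are all assumed choices):

```python
import numpy as np

def hypothesize_lines(points, n_hyp=200, seed=0):
    """Sample line hypotheses (a, b, c), with ax + by + c = 0, from random point pairs."""
    rng = np.random.default_rng(seed)
    hyps = []
    for _ in range(n_hyp):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2            # normal of the line through the pair
        c = -(a * x1 + b * y1)
        norm = np.hypot(a, b)
        if norm > 1e-9:                    # skip degenerate pairs
            hyps.append([a / norm, b / norm, c / norm])
    return np.array(hyps)

def latent_embedding(points, hyps, sigma=0.05, dim=3):
    """Embed points via truncated SVD of a soft point-to-hypothesis preference matrix."""
    res = np.abs(points @ hyps[:, :2].T + hyps[:, 2])   # point-to-line distances
    pref = np.exp(-(res / sigma) ** 2)                  # soft preferences in [0, 1]
    U, S, _ = np.linalg.svd(pref, full_matrices=False)
    return U[:, :dim] * S[:dim]

# Gross outliers agree with few hypotheses, so their preference rows are
# weak and they collapse toward the origin of the latent space.
t = np.linspace(0.0, 1.0, 50)
inliers = np.column_stack([t, 0.5 * t + 0.25])          # points on one line
outliers = np.random.default_rng(1).uniform(0.0, 1.0, size=(30, 2))
points = np.vstack([inliers, outliers])
emb = latent_embedding(points, hypothesize_lines(points))
keep = np.linalg.norm(emb, axis=1) > 0.5 * np.linalg.norm(emb, axis=1).max()
```

The paper analyzes two latent semantic spaces, one for data points and one for model hypotheses; this sketch only shows the point side.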
Related papers
- Characterizing Disparity Between Edge Models and High-Accuracy Base Models for Vision Tasks [5.081175754775484]
We introduce XDELTA, a novel explainable AI tool that explains differences between a high-accuracy base model and a computationally efficient but lower-accuracy edge model.
We conduct a comprehensive evaluation to test XDELTA's ability to explain model discrepancies, using over 1.2 million images and 24 models, and assessing real-world deployments with six participants.
arXiv Detail & Related papers (2024-07-13T22:05:58Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
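The "tall data" recombination rests on a standard identity: for conditionally independent observations, the posterior score satisfies grad log p(theta | x_1..n) = sum_i grad log p(theta | x_i) - (n - 1) grad log p(theta). A minimal numerical check of that identity, with analytic Gaussian scores standing in for the paper's learned diffusion scores:

```python
import numpy as np

def gaussian_score(theta, mean, var):
    """Score of a Gaussian: d/dtheta log N(theta; mean, var)."""
    return (mean - theta) / var

# Prior theta ~ N(0, 1); with likelihood x | theta ~ N(theta, 1), each single
# observation x_i yields the posterior N(x_i / 2, 1 / 2).
prior_mean, prior_var = 0.0, 1.0
xs = np.array([0.8, 1.2, 1.0])            # three observations of the same theta
theta = 0.5

per_obs = sum(gaussian_score(theta, x / 2, 0.5) for x in xs)
tall_score = per_obs - (len(xs) - 1) * gaussian_score(theta, prior_mean, prior_var)

# Closed form for comparison: the posterior over all 3 obs is N(sum(xs)/4, 1/4).
exact = gaussian_score(theta, xs.sum() / 4, 0.25)
print(tall_score, exact)                  # the two agree
```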
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
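The contrast between the two function classes is easiest to see side by side; a schematic sketch (variable names are mine, not the paper's notation), evaluating values for one fixed state-action pair across contexts:

```python
import numpy as np

d, n_contexts = 4, 3
rng = np.random.default_rng(0)

# Model I: the feature map varies with the context, the weights are shared.
phi_per_context = rng.normal(size=(n_contexts, d))   # phi_c(s, a) for fixed (s, a)
w_shared = rng.normal(size=d)
q_model1 = phi_per_context @ w_shared                # Q_c = <phi_c, w>

# Model II: the feature map is shared, the weights vary with the context.
phi_shared = rng.normal(size=d)                      # phi(s, a) for fixed (s, a)
w_per_context = rng.normal(size=(n_contexts, d))
q_model2 = w_per_context @ phi_shared                # Q_c = <phi, w_c>
```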
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
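The accounting behind compute-equivalent comparisons is simple: a fixed budget of accelerator hours translates into a per-model token budget through measured throughput. A trivial sketch (the throughput figures are invented):

```python
def token_budget(accelerator_hours: float, tokens_per_second: float) -> int:
    """Tokens a model can train on within a fixed accelerator-hour budget."""
    return int(accelerator_hours * 3600 * tokens_per_second)

# Two models with different throughput get different data budgets for the
# same compute, which is what makes the comparison fair.
print(token_budget(6.0, 20_000))   # faster model: 432,000,000 tokens
print(token_budget(6.0, 2_000))    # slower model:  43,200,000 tokens
```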
arXiv Detail & Related papers (2023-09-20T10:31:17Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
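A concrete instance of the "fair distance between selected samples" question is the Fréchet distance between Gaussian fits of two feature sets, as in FID; a compact sketch (the feature extractor is omitted, and the inputs here are placeholder arrays):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets (rows = samples)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):        # drop numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(500, 8)),
                       rng.normal(0.5, 1.0, size=(500, 8))))
```

The paper's point is that each choice here (which features, which samples, how many) moves the reported number.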
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Latent Space Model for Higher-order Networks and Generalized Tensor Decomposition [18.07071669486882]
We introduce a unified framework, formulated as general latent space models, to study complex higher-order network interactions.
We formulate the relationship between the latent positions and the observed data via a generalized multilinear kernel as the link function.
We demonstrate the effectiveness of our method on synthetic data.
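A toy version of that forward model: each node carries a latent vector, a hyperedge's logit is a multilinear form of the participating nodes' vectors, and a link function maps it to an observation probability (the logistic link and all sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 10, 4
Z = rng.normal(size=(n_nodes, dim))          # latent positions, one row per node

def triple_logit(i, j, k):
    """Multilinear kernel: sum over latent dims of the elementwise product."""
    return float(np.sum(Z[i] * Z[j] * Z[k]))

def edge_prob(i, j, k):
    """Logistic link from latent interaction to hyperedge probability."""
    return 1.0 / (1.0 + np.exp(-triple_logit(i, j, k)))

print(edge_prob(0, 1, 2))
```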
arXiv Detail & Related papers (2021-06-30T13:11:17Z)
- Improved Prediction and Network Estimation Using the Monotone Single Index Multi-variate Autoregressive Model [34.529641317832024]
We develop a semi-parametric approach based on the monotone single-index multi-variate autoregressive model (SIMAM).
We provide theoretical guarantees for dependent data and an alternating projected gradient descent algorithm.
We demonstrate superior performance on both simulated data and two real data examples.
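The model class itself is compact: each coordinate of the next observation is a monotone function of a linear index of the current one. A generative sketch under that reading (the softplus link and noise scale are my stand-ins):

```python
import numpy as np

def softplus(x):
    """A monotone nondecreasing link, one admissible choice of f."""
    return np.log1p(np.exp(x))

rng = np.random.default_rng(0)
dim, T = 5, 100
A = 0.3 * rng.normal(size=(dim, dim))        # network / index matrix
x = np.zeros((T, dim))
for t in range(T - 1):
    # Monotone single-index autoregression: x_{t+1} = f(A x_t) + noise.
    x[t + 1] = softplus(A @ x[t]) + 0.1 * rng.normal(size=dim)
```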
arXiv Detail & Related papers (2021-06-28T12:32:29Z)
- Hierarchical Representation via Message Propagation for Robust Model Fitting [28.03005930782681]
We propose a novel hierarchical representation via message propagation (HRMP) method for robust model fitting.
We formulate the consensus information and the preference information as a hierarchical representation to alleviate the sensitivity to gross outliers.
The proposed HRMP can not only accurately estimate the number and parameters of multiple model instances, but also handle multi-structural data contaminated with a large number of outliers.
arXiv Detail & Related papers (2020-12-29T04:14:19Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
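The parameterization works one coordinate at a time: a network per coordinate estimates the derivative of the log-conditional with respect to that coordinate, and sampling can proceed ancestrally with one-dimensional Langevin dynamics per conditional. A schematic, with analytic Gaussian conditionals standing in for the learned score networks:

```python
import numpy as np

def cond_score(i, x):
    """s_i = d/dx_i log p(x_i | x_{<i}); an analytic AR(1)-style Gaussian
    conditional N(0.5 * x_{i-1}, 1) stands in for a learned network."""
    mean = 0.5 * x[i - 1] if i > 0 else 0.0
    return mean - x[i]                      # Gaussian score with unit variance

def ar_langevin_sample(dim=3, steps=200, eps=0.05, seed=0):
    """Sample coordinate by coordinate, running 1-D Langevin dynamics
    on each conditional score in turn."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for i in range(dim):
        x[i] = rng.normal()                 # initialize the new coordinate
        for _ in range(steps):
            x[i] += eps * cond_score(i, x) + np.sqrt(2 * eps) * rng.normal()
    return x

print(ar_langevin_sample())
```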
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
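Stripped to its core, the amortized setup trains a classifier over model indices purely on simulator output, then reads (approximate) posterior model probabilities off the classifier at inference time. A bare-bones sketch with a logistic regression standing in for the paper's evidential network:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(model_idx, n):
    """Simulator for two toy competing models of 5-sample datasets."""
    scale = 1.0 if model_idx == 0 else 2.0   # model 1 has heavier spread
    return rng.normal(0, scale, size=(n, 5))

# Train on simulations only; neither model is ever fit to the observed data.
X = np.vstack([simulate(0, 2000), simulate(1, 2000)])
y = np.repeat([0, 1], 2000)
clf = LogisticRegression(max_iter=1000).fit(np.abs(X), y)  # |x| separates the scales

# At inference, the classifier emits approximate posterior model probabilities.
observed = rng.normal(0, 2.0, size=(1, 5))
print(clf.predict_proba(np.abs(observed)))   # should favor model 1
```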
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.