New Methods for Detecting Concentric Objects With High Accuracy
- URL: http://arxiv.org/abs/2103.05104v1
- Date: Tue, 16 Feb 2021 08:19:18 GMT
- Title: New Methods for Detecting Concentric Objects With High Accuracy
- Authors: Ali A. Al-Sharadqah and Lorenzo Rull
- Abstract summary: Fitting geometric objects to digitized data is an important problem in many areas such as iris detection, autonomous navigation, and industrial robotics operations.
There are two common approaches to fitting geometric shapes to data: the geometric (iterative) approach and the algebraic (non-iterative) approach.
We develop new estimators, which can be used as reliable initial guesses for other iterative methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fitting concentric geometric objects to digitized data is an important
problem in many areas such as iris detection, autonomous navigation, and
industrial robotics operations. There are two common approaches to fitting
geometric shapes to data: the geometric (iterative) approach and algebraic
(non-iterative) approach. The geometric approach is a nonlinear iterative
method that minimizes the sum of squared Euclidean distances from the
observed points to the ellipses; it is regarded as the most accurate method,
but it needs a good initial guess to achieve a fast convergence rate. The
algebraic approach is based on minimizing algebraic distances subject to
constraints imposed on the parameter space. Each algebraic method depends on
the imposed constraint and can be solved with the aid of the generalized
eigenvalue problem. Only a few methods in the literature have been developed
to solve the problem of concentric ellipses. Here we study the statistical
properties of existing methods by first establishing a general mathematical
and statistical framework for this problem. Using rigorous perturbation
analysis, we derive the variance and bias of each method under the
small-sigma model. We also
develop new estimators, which can be used as reliable initial guesses for other
iterative methods. We then compare the performance of the methods according to
their theoretical accuracy. Not only do the methods described here outperform
existing non-iterative methods, they are also quite robust against large
noise. These methods and their practical performance are assessed by a series
of numerical experiments on both synthetic and real data.
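The two-stage strategy described in the abstract, a non-iterative algebraic fit used as the initial guess for an iterative geometric refinement, can be sketched for the simpler case of a single circle. This is a minimal illustration only: the function names are ours, the Kåsa algebraic fit stands in for the paper's concentric-ellipse estimators, and the paper itself treats the harder concentric case.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (non-iterative) circle fit: minimizes the algebraic
    distance sum((x^2 + y^2 + D*x + E*y + F)^2) via linear least squares."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

def geometric_circle_fit(x, y, cx, cy, r, iters=20):
    """Geometric (iterative) refinement: Gauss-Newton on the sum of squared
    Euclidean point-to-circle distances, started from an algebraic guess."""
    p = np.array([cx, cy, r], dtype=float)
    for _ in range(iters):
        dx, dy = x - p[0], y - p[1]
        d = np.hypot(dx, dy)
        res = d - p[2]                     # signed Euclidean residuals
        J = np.column_stack([-dx / d, -dy / d, -np.ones_like(d)])
        step = np.linalg.lstsq(J, -res, rcond=None)[0]
        p += step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

With a good algebraic guess, the Gauss-Newton refinement typically converges in a handful of iterations; with a poor guess it can stall, which is exactly why reliable non-iterative estimators matter as initializers.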
Related papers
- Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization [7.114174944371803]
The problem of finding a suitable point embedding from Euclidean distance information between point pairs arises both as a core task and as a subproblem in machine learning.
In this paper, we aim to solve this problem given a minimal number of samples.
arXiv Detail & Related papers (2024-10-22T13:02:12Z)
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Optimised Least Squares Approach for Accurate Polygon and Ellipse Fitting [0.0]
The method is validated on synthetic and real-world data sets.
The proposed method is a powerful tool for shape fitting in computer vision and geometry processing applications.
arXiv Detail & Related papers (2023-07-13T02:31:06Z)
- Automated differential equation solver based on the parametric approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution through a parameterized approximation.
It allows a wide class of equations to be solved in an automated manner without changing the algorithm's parameters.
arXiv Detail & Related papers (2022-05-11T10:06:47Z)
- Robust online joint state/input/parameter estimation of linear systems [0.0]
This paper presents a method for jointly estimating the state, input, and parameters of linear systems in an online fashion.
The method is specially designed for measurements that are corrupted with non-Gaussian noise or outliers.
arXiv Detail & Related papers (2022-04-12T09:41:28Z)
- Manifold Hypothesis in Data Analysis: Double Geometrically-Probabilistic Approach to Manifold Dimension Estimation [92.81218653234669]
We present a new approach to manifold hypothesis checking and underlying manifold dimension estimation.
Our geometric method is a modification, for sparse data, of the well-known box-counting algorithm for Minkowski dimension calculation.
Experiments on real datasets show that the suggested approach, based on the combination of the two methods, is powerful and effective.
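The classic box-counting idea behind that method can be sketched as follows; this is the generic textbook algorithm, not the paper's sparse-data modification. It counts occupied grid cells N(eps) at several scales and reads the dimension off the slope of log N(eps) versus log(1/eps).

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the Minkowski (box-counting) dimension of a point cloud:
    count occupied grid cells N(eps) at each scale eps, then fit the slope
    of log N(eps) against log(1/eps)."""
    pts = np.asarray(points, dtype=float)
    # Normalize the cloud into the unit box so the scales are comparable.
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0).max()
    counts = []
    for eps in scales:
        cells = np.floor(pts / eps).astype(int)
        counts.append(len({tuple(c) for c in cells}))  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope
```

For a curve embedded in the plane the estimated slope is close to 1; for a space-filling sample it approaches 2.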
arXiv Detail & Related papers (2021-07-08T15:35:54Z)
- Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach [33.21301562890201]
We use an adaptive sample size scheme that exploits the superlinear convergence of quasi-Newton methods globally and throughout the learning process.
We show that if the initial sample size is sufficiently large and we use quasi-Newton methods to solve each subproblem, the subproblems can be solved superlinearly fast.
arXiv Detail & Related papers (2021-06-10T01:08:51Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
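The projected fixed-point setting in that summary covers temporal-difference learning with linear function approximation. A minimal TD(0) sketch on a 5-state random walk shows the idea; this is the standard textbook setup, not the paper's analysis, and one-hot features make the linear case tabular.

```python
import numpy as np

def td0_linear(features, episodes, alpha=0.05, gamma=1.0, seed=0):
    """TD(0) with linear function approximation: stochastically approximates
    the projected fixed point V ~ Phi @ theta of the Bellman operator,
    restricted to the span of the feature matrix Phi."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(features.shape[1])
    for _ in range(episodes):
        s = 2                              # start in the middle state
        while 0 < s < 4:                   # states 0 and 4 are terminal
            s2 = s + rng.choice([-1, 1])   # unbiased random walk
            r = 1.0 if s2 == 4 else 0.0    # reward only at the right end
            v2 = 0.0 if s2 in (0, 4) else features[s2] @ theta
            delta = r + gamma * v2 - features[s] @ theta  # TD error
            theta += alpha * delta * features[s]
            s = s2
    return theta
```

With one-hot features the learned values of the interior states approach the true hitting probabilities 0.25, 0.5, and 0.75.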
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.