New Methods for Detecting Concentric Objects With High Accuracy
- URL: http://arxiv.org/abs/2103.05104v1
- Date: Tue, 16 Feb 2021 08:19:18 GMT
- Title: New Methods for Detecting Concentric Objects With High Accuracy
- Authors: Ali A. Al-Sharadqah and Lorenzo Rull
- Abstract summary: Fitting geometric objects to digitized data is an important problem in many areas such as iris detection, autonomous navigation, and industrial robotics operations.
There are two common approaches to fitting geometric shapes to data: the geometric (iterative) approach and the algebraic (non-iterative) approach.
We develop new estimators, which can be used as reliable initial guesses for other iterative methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fitting concentric geometric objects to digitized data is an important
problem in many areas such as iris detection, autonomous navigation, and
industrial robotics operations. There are two common approaches to fitting
geometric shapes to data: the geometric (iterative) approach and the algebraic
(non-iterative) approach. The geometric approach is a nonlinear iterative
method that minimizes the sum of squared Euclidean distances from the
observed points to the ellipses; it is regarded as the most accurate method,
but it needs a good initial guess to converge quickly. The algebraic
approach is based on minimizing the algebraic distances with some constraints
imposed on parametric space. Each algebraic method depends on the imposed
constraint, and it can be solved with the aid of the generalized eigenvalue
problem. Only a few methods in the literature have been developed to solve the
problem of concentric ellipses. Here we study the statistical properties of
existing methods by first establishing a general mathematical and statistical
framework for this problem. Using rigorous perturbation analysis, we derive the
variances and biases of each method under the small-sigma model. We also
develop new estimators, which can be used as reliable initial guesses for other
iterative methods. Then we compare the performance of each method according to
their theoretical accuracy. Not only do our methods outperform existing
non-iterative methods, but they are also quite robust against large
noise. These methods and their practical performances are assessed by a series
of numerical experiments on both synthetic and real data.
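As a concrete illustration of the algebraic approach described above, the sketch below fits a single (non-concentric) ellipse by minimizing algebraic distance under the classic 4ac - b^2 = 1 constraint, which reduces to a generalized eigenvalue problem. This is a generic Fitzgibbon-style fit, offered only as background; it is not the concentric-ellipse estimators developed in the paper.

```python
import numpy as np

def fit_ellipse_algebraic(x, y):
    """Fitzgibbon-style direct ellipse fit: minimize algebraic distance
    subject to 4ac - b^2 = 1, solved as a generalized eigenvalue problem
    S a = lam * C a."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix of the design matrix
    C = np.zeros((6, 6))             # constraint matrix: a^T C a = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # S a = lam C a  is equivalent to  inv(S) C a = (1/lam) a
    w, V = np.linalg.eig(np.linalg.solve(S, C))
    # The ellipse solution is the eigenvector with a^T C a > 0
    for k in np.argsort(w.real)[::-1]:
        a = V[:, k].real
        q = a @ C @ a
        if q > 1e-12:
            return a / np.sqrt(q)    # rescale so 4ac - b^2 = 1 exactly
    raise ValueError("no ellipse solution found")
```

The constraint matrix C encodes the ellipse-specific condition, which is why each algebraic method is tied to its imposed constraint: swapping C changes which conic family the eigenproblem selects.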
Related papers
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Optimised Least Squares Approach for Accurate Polygon and Ellipse Fitting [0.0]
The method is validated on synthetic and real-world data sets.
The proposed method is a powerful tool for shape fitting in computer vision and geometry processing applications.
arXiv Detail & Related papers (2023-07-13T02:31:06Z)
- Automated differential equation solver based on the parametric approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution through a parameterized approximation.
It allows a wide class of equations to be solved automatically, without changing the algorithm's parameters.
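The entry above does not spell out its algorithm; as a minimal illustration of solving a differential equation through a parameterized approximation, the sketch below picks polynomial coefficients that minimize the ODE residual at collocation points. The example problem y' = -y, y(0) = 1 is a hypothetical choice for illustration, not taken from the paper.

```python
import numpy as np

def solve_decay_ode(degree=10, n_points=50):
    """Sketch: solve y' = -y, y(0) = 1 on [0, 1] by choosing polynomial
    coefficients that minimize the squared ODE residual at collocation
    points; for this linear ODE the fit is a linear least-squares problem."""
    t = np.linspace(0.0, 1.0, n_points)
    k = np.arange(1, degree + 1)
    # Trial solution y(t) = 1 + sum_k c_k t^k satisfies y(0) = 1 by design
    basis = t[:, None] ** k              # t^k at each collocation point
    dbasis = k * t[:, None] ** (k - 1)   # derivative of t^k
    # Residual y' + y = (dbasis + basis) @ c + 1; minimize its square
    c, *_ = np.linalg.lstsq(dbasis + basis, -np.ones(n_points), rcond=None)
    return lambda s: 1.0 + (np.asarray(s)[:, None] ** k) @ c
```

For nonlinear equations or neural-network approximants, the same residual-minimization idea applies, but the inner problem becomes a general (iterative) optimization rather than one linear solve.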
arXiv Detail & Related papers (2022-05-11T10:06:47Z)
- Robust online joint state/input/parameter estimation of linear systems [0.0]
This paper presents a method for jointly estimating the state, input, and parameters of linear systems in an online fashion.
The method is specially designed for measurements that are corrupted with non-Gaussian noise or outliers.
arXiv Detail & Related papers (2022-04-12T09:41:28Z)
- Manifold Hypothesis in Data Analysis: Double Geometrically-Probabilistic Approach to Manifold Dimension Estimation [92.81218653234669]
We present a new approach to manifold hypothesis checking and to estimating the dimension of the underlying manifold.
Our geometrical method is a modification for sparse data of a well-known box-counting algorithm for Minkowski dimension calculation.
Experiments on real datasets show that the suggested approach, which combines the two methods, is powerful and effective.
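The box-counting idea behind the entry above can be sketched as follows: count occupied grid cells at several scales and read the dimension off the log-log slope. This is the plain box-count, without the sparse-data modification the paper proposes.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the Minkowski (box-counting) dimension of a point set.

    Counts occupied grid cells N(eps) at several box sizes eps and fits
    log N(eps) ~ dim * log(1/eps) by least squares."""
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    counts = []
    for eps in sizes:
        # Assign each point to a grid cell of side eps, count unique cells
        cells = np.floor((points - lo) / eps).astype(np.int64)
        counts.append(len(np.unique(cells, axis=0)))
    # Slope of log N versus log(1/eps) is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

For sparse samples the counts saturate once every box holds at most one point, which is exactly the regime the paper's modification is meant to handle.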
arXiv Detail & Related papers (2021-07-08T15:35:54Z)
- Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach [33.21301562890201]
We use an adaptive sample size scheme that exploits the superlinear convergence of quasi-Newton methods globally and throughout the learning process.
We show that if the initial sample size is sufficiently large and we use quasi-Newton methods to solve each subproblem, the subproblems can be solved superlinearly fast.
arXiv Detail & Related papers (2021-06-10T01:08:51Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem [0.966840768820136]
This paper considers the multi-agent linear least-squares problem in a server-agent network.
The goal for the agents is to compute a linear model that optimally fits the collective data points held by all the agents, without sharing their individual local data points.
We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of data points on the rate of convergence of the gradient-descent method.
arXiv Detail & Related papers (2020-08-06T20:01:18Z)
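The entry above concerns an iterative pre-conditioning scheme for the distributed server-agent setting, which is not reproduced here. As a simpler, centralized illustration of why pre-conditioning helps gradient descent on ill-conditioned least-squares problems, the sketch below uses a diagonal (Jacobi) preconditioner; this is an assumption for illustration, not the paper's method.

```python
import numpy as np

def preconditioned_gd(A, b, steps=500, lr=1.0):
    """Illustration only: gradient descent on the least-squares objective
    0.5 * ||A x - b||^2, preconditioned by the diagonal of A^T A (Jacobi),
    which rescales badly scaled columns so all directions converge at a
    similar rate."""
    G = A.T @ A
    rhs = A.T @ b
    p_inv = 1.0 / np.diag(G)         # inverse of the diagonal preconditioner
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        # Preconditioned step: x <- x - lr * P^{-1} * gradient
        x -= lr * p_inv * (G @ x - rhs)
    return x
```

With columns of A on wildly different scales, plain gradient descent crawls along the poorly scaled directions, while the preconditioned iteration converges in a handful of steps; this scale-robustness is the general motivation shared with the paper's distributed scheme.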
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.