Average degree of the essential variety
- URL: http://arxiv.org/abs/2212.01596v2
- Date: Fri, 10 Nov 2023 10:30:46 GMT
- Title: Average degree of the essential variety
- Authors: Paul Breiding and Samantha Fairchild and Pierpaola Santarsiero and
Elima Shehu
- Abstract summary: The degree of the essential variety is $10$, so this intersection consists of 10 complex points in general.
We compute the expected number of real intersection points when the linear space is random.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The essential variety is an algebraic subvariety of dimension $5$ in real
projective space $\mathbb R\mathrm P^{8}$ which encodes the relative pose of
two calibrated pinhole cameras. The $5$-point algorithm in computer vision
computes the real points in the intersection of the essential variety with a
linear space of codimension $5$. The degree of the essential variety is $10$,
so this intersection consists of 10 complex points in general.
We compute the expected number of real intersection points when the linear
space is random. We focus on two probability distributions for linear spaces.
The first distribution is invariant under the action of the orthogonal group
$\mathrm{O}(9)$ acting on linear spaces in $\mathbb R\mathrm P^{8}$. In this
case, the expected number of real intersection points is equal to $4$. The
second distribution is motivated by computer vision and is defined by
choosing 5 point correspondences in the image planes $\mathbb R\mathrm
P^2\times \mathbb R\mathrm P^2$ uniformly at random. A Monte Carlo computation
suggests that with high probability the expected value lies in the interval
$(3.95 - 0.05,\ 3.95 + 0.05)$.
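For concreteness, the following is a minimal sketch (Python with NumPy, written for this summary rather than taken from the paper; the helper names `random_rp2_point`, `epipolar_row` and `is_essential` are illustrative) of one sample from the vision-motivated distribution: five correspondences $(x,y)\in\mathbb R\mathrm P^2\times \mathbb R\mathrm P^2$ are drawn uniformly at random, each epipolar constraint $y^\top E x = 0$ contributes one linear equation in the nine entries of $E$, and the kernel of the resulting $5\times 9$ matrix is the codimension-$5$ linear space that the $5$-point algorithm intersects with the essential variety. Counting the real points of that intersection would additionally require solving the degree-$10$ polynomial system (e.g. with a five-point solver or numerical homotopy continuation), which the sketch does not attempt.

```python
# Sketch of the "vision-motivated" sampling scheme described in the abstract.
# Assumes only NumPy; does not reproduce the paper's computation.
import numpy as np

rng = np.random.default_rng(0)

def random_rp2_point():
    """Uniform random point of RP^2, represented by a unit vector in R^3."""
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

def epipolar_row(x, y):
    """The epipolar constraint y^T E x = 0 is linear in vec(E) (row-major)."""
    return np.kron(y, x)

def is_essential(E, tol=1e-8):
    """A real 3x3 matrix is essential iff its singular values are (s, s, 0)."""
    s = np.linalg.svd(E, compute_uv=False)
    return abs(s[0] - s[1]) < tol and s[2] < tol

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Sanity check: E = [t]_x R encodes a relative pose and lies on the variety.
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
t = random_rp2_point()
assert is_essential(skew(t) @ R)

# One sample from the vision-motivated distribution: 5 random correspondences,
# each giving one linear equation in vec(E).
correspondences = [(random_rp2_point(), random_rp2_point()) for _ in range(5)]
A = np.array([epipolar_row(x, y) for x, y in correspondences])   # 5 x 9

# Generically A has rank 5, so its kernel is a 4-dimensional subspace of R^9,
# i.e. a linear space of codimension 5 in RP^8.
_, S, Vt = np.linalg.svd(A)
kernel = Vt[5:]                      # 4 x 9 basis of the kernel
print("rank of A:        ", np.linalg.matrix_rank(A))
print("kernel dimension: ", kernel.shape[0])

# Every essential matrix compatible with the 5 correspondences lies in
# span(kernel); per the abstract, this linear space meets the essential
# variety in 10 complex points in general, on average about 3.95 of them real.
```

The sketch only verifies the dimension count behind the abstract (a $5\times 9$ system of epipolar constraints cuts out a codimension-$5$ linear space in $\mathbb R\mathrm P^{8}$); the Monte Carlo estimate of the number of real intersection points is the paper's contribution and is not reproduced here.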
Related papers
- Matrix Product Sketching via Coordinated Sampling [15.820518033589705]
We revisit the well-studied problem of approximating a matrix product, $\mathbf{A}^T\mathbf{B}$, based on small space sketches.
We prove that, when $\mathbf{A}$ and $\mathbf{B}$ are sparse, methods based on coordinated random sampling can outperform classical linear sketching approaches.
arXiv Detail & Related papers (2025-01-29T18:35:38Z) - The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Metric Dimension and Resolvability of Jaccard Spaces [49.1574468325115]
A subset of points in a metric space is said to resolve it if each point in the space is uniquely characterized by its distance to each point in the subset.
We show that a much smaller subset of $2^X$ suffices to resolve, with high probability, all different pairs of subsets of cardinality at most $\sqrt{|X|}/\ln|X|$, up to a factor.
arXiv Detail & Related papers (2024-05-19T02:09:50Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - High-dimensional Location Estimation via Norm Concentration for Subgamma
Vectors [15.802475232604667]
In location estimation, we are given $n$ samples from a known distribution $f$ shifted by an unknown translation $\lambda$.
Asymptotically, the maximum likelihood estimate achieves the Cramér-Rao bound of error $\mathcal{N}(0, \frac{1}{n}\mathcal{I})$.
We build on the theory using smoothed estimators to bound the error for finite $n$ in terms of $\mathcal{I}_r$, the Fisher information of the $r$-smoothed...
arXiv Detail & Related papers (2023-02-05T22:17:04Z) - Metricizing the Euclidean Space towards Desired Distance Relations in
Point Clouds [1.2366208723499545]
We attack unsupervised learning algorithms, specifically $k$-Means and density-based (DBSCAN) clustering algorithms.
We show that the results of clustering algorithms may not generally be trustworthy, unless there is a standardized and fixed prescription to use a specific distance function.
arXiv Detail & Related papers (2022-11-07T16:37:29Z) - A random matrix model for random approximate $t$-designs [1.534667887016089]
A random matrix model is proposed to describe the probability distribution of $\delta(\nu_{\mathcal{S}},t)$ for any $t$.
We show that our model satisfies the so-called spectral gap conjecture, i.e. we prove that with probability $1$ there is $t\in\mathbb{Z}_+$ such that $\sup_{k\in\mathbb{Z}_+}\delta(k)=\delta(t)$.
arXiv Detail & Related papers (2022-10-14T14:50:06Z) - Matrix concentration inequalities and efficiency of random universal
sets of quantum gates [0.0]
For a random set $\mathcal{S} \subset U(d)$ of quantum gates we provide bounds on the probability that $\mathcal{S}$ forms a $\delta$-approximate $t$-design.
We show that for $\mathcal{S}$ drawn from an exact $t$-design the probability that it forms a $\delta$-approximate $t$-design satisfies the inequality $\mathbb{P}\left(\delta \geq x\right)\leq 2D_t\ldots$
arXiv Detail & Related papers (2022-02-10T23:44:09Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ less bits for storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\text{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A\in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F\ldots$
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Variance-Aware Confidence Set: Variance-Dependent Bound for Linear
Bandits and Horizon-Free Bound for Linear Mixture MDP [76.94328400919836]
We show how to construct variance-aware confidence sets for linear bandits and linear mixture Markov Decision Processes (MDPs).
For linear bandits, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{1 + \sum_{i=1}^K\sigma_i^2})$ regret bound, where $d$ is the feature dimension.
For linear mixture MDPs, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{K})$ regret bound, where...
arXiv Detail & Related papers (2021-01-29T18:57:52Z) - Negative probabilities: What they are and what they are for [0.0]
An observation space $\mathcal{S}$ is a family of probability distributions $\langle P_i : i\in I \rangle$ sharing a common sample space $\Omega$ in a consistent way.
A grounding for $\mathcal{S}$ is a signed probability distribution $\mathcal{P}$ on $\Omega$ yielding the correct marginal distribution $P_i$ for every $i$.
arXiv Detail & Related papers (2020-09-22T13:45:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.