Automatic classification of deformable shapes
- URL: http://arxiv.org/abs/2211.02530v1
- Date: Fri, 4 Nov 2022 15:44:56 GMT
- Title: Automatic classification of deformable shapes
- Authors: Hossein Dabirian and Radmir Sultamuratov and James Herring and Carlos
El Tallawi and William Zoghbi and Andreas Mang and Robert Azencott
- Abstract summary: Let $\mathcal{D}$ be a dataset of smooth 3D-surfaces, partitioned into disjoint classes $\mathit{CL}_j$, $j = 1, \ldots, k$.
We show how optimized diffeomorphic registration applied to large numbers of pairs $S, S' \in \mathcal{D}$ can be used to implement automatic classification on $\mathcal{D}$.
We generate classifiers invariant under rigid motions in $\mathbb{R}^3$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Let $\mathcal{D}$ be a dataset of smooth 3D-surfaces, partitioned into
disjoint classes $\mathit{CL}_j$, $j= 1, \ldots, k$. We show how optimized
diffeomorphic registration applied to large numbers of pairs $S,S' \in
\mathcal{D}$ can provide descriptive feature vectors to implement automatic
classification on $\mathcal{D}$, and generate classifiers invariant under rigid
motions in $\mathbb{R}^3$. To enhance the accuracy of automatic classification, we
enrich the smallest classes $\mathit{CL}_j$ by diffeomorphic interpolation of
smooth surfaces between pairs $S,S' \in \mathit{CL}_j$. We also implement small
random perturbations of surfaces $S\in \mathit{CL}_j$ by random flows of smooth
diffeomorphisms $F_t:\mathbb{R}^3 \to \mathbb{R}^3$. Finally, we test our
automatic classification methods on a cardiology database of discretized
mitral valve surfaces.
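As a concrete illustration of the augmentation step, the following is a minimal sketch (not the authors' code) of perturbing a discretized surface by the flow of a smooth random velocity field, in the spirit of the random flows of diffeomorphisms $F_t:\mathbb{R}^3 \to \mathbb{R}^3$ mentioned in the abstract. The Gaussian radial-basis velocity field, its width, and the step counts are illustrative assumptions, not values from the paper.

import numpy as np

def random_flow_perturbation(vertices, n_controls=10, sigma=0.2,
                             amplitude=0.05, n_steps=20, seed=None):
    """Advect an (N, 3) array of surface vertices along a smooth random velocity field.

    The field is a Gaussian radial-basis-function field attached to random control
    points; for small amplitudes and step sizes the composed Euler steps approximate
    a smooth flow of diffeomorphisms of R^3.
    """
    rng = np.random.default_rng(seed)
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    controls = rng.uniform(lo, hi, size=(n_controls, 3))         # kernel centers
    momenta = amplitude * rng.standard_normal((n_controls, 3))   # field coefficients
    x = vertices.astype(float).copy()
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        # v(x) = sum_j exp(-|x - c_j|^2 / (2 sigma^2)) p_j
        sq_dists = ((x[:, None, :] - controls[None, :, :]) ** 2).sum(axis=-1)
        weights = np.exp(-sq_dists / (2.0 * sigma ** 2))
        x = x + dt * (weights @ momenta)
    return x

# Usage: a random point cloud stands in for a discretized mitral valve surface.
surface = np.random.default_rng(0).standard_normal((500, 3))
perturbed = random_flow_perturbation(surface, seed=1)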
Related papers
- A Theory of Interpretable Approximations [61.90216959710842]
We study the idea of approximating a target concept $c$ by a small aggregation of concepts from some base class $\mathcal{H}$.
For any given pair of $\mathcal{H}$ and $c$, exactly one of these cases holds: (i) $c$ cannot be approximated by $\mathcal{H}$ with arbitrary accuracy.
We show that, in the case of interpretable approximations, even a slightly nontrivial a priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-
arXiv Detail & Related papers (2024-06-15T06:43:45Z)
- Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that, in the worst case, exponential dependence on $m$ (the number of heads) is unavoidable. A forward-pass sketch of such a layer is given after this entry.
arXiv Detail & Related papers (2024-02-06T15:39:09Z)
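For reference, here is a minimal sketch of the standard multi-head attention forward pass, i.e., the kind of map whose learnability the entry above studies. This is the textbook definition with illustrative dimensions, not the specific parameterization or algorithm from the paper.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, heads, W_out):
    """X: (seq_len, d); heads: list of m tuples (W_q, W_k, W_v), each (d, d_head); W_out: (m * d_head, d)."""
    outputs = []
    for W_q, W_k, W_v in heads:
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        attn = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # (seq_len, seq_len) attention weights
        outputs.append(attn @ V)                        # (seq_len, d_head)
    return np.concatenate(outputs, axis=1) @ W_out      # (seq_len, d)

# Usage with m = 2 heads, d = 8, d_head = 4.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
heads = [tuple(rng.standard_normal((8, 4)) for _ in range(3)) for _ in range(2)]
W_out = rng.standard_normal((8, 8))
y = multi_head_attention(X, heads, W_out)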
- Statistical learning on measures: an application to persistence diagrams [0.0]
We consider a binary supervised learning classification problem where, instead of having data in a finite-dimensional Euclidean space, we observe measures on a compact space $\mathcal{X}$.
We show that our framework allows more flexibility and diversity in the input data we can deal with.
While such a framework has many possible applications, this work strongly emphasizes classifying data via topological descriptors called persistence diagrams.
arXiv Detail & Related papers (2023-03-15T09:01:37Z)
- Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_\star(\mathbf{x}) + g_\star(\mathbf{y})$.
arXiv Detail & Related papers (2023-02-27T16:34:21Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{opt} + \epsilon$ with high probability. A plain gradient-descent sketch of this setup is given after this entry.
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
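A minimal sketch of the learning setup in the entry above: plain gradient descent on the empirical squared loss of $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$, here with a ReLU as the monotone activation. This illustrates the gradient-descent learner in its simplest form; it is not the paper's analysis, and the initialization, step size, and noise model are illustrative assumptions.

import numpy as np

def learn_single_neuron(X, y, lr=0.1, n_iters=500, seed=0):
    """Gradient descent on F(w) = mean((relu(X w) - y)^2)."""
    relu = lambda z: np.maximum(z, 0.0)
    relu_grad = lambda z: (z > 0).astype(float)
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.standard_normal(X.shape[1])            # small random init
    for _ in range(n_iters):
        z = X @ w
        residual = relu(z) - y
        grad = 2.0 * (X * (residual * relu_grad(z))[:, None]).mean(axis=0)
        w = w - lr * grad
    return w

# Usage: labels from a planted neuron, corrupted by additive noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
w_true = rng.standard_normal(5)
y = np.maximum(X @ w_true, 0.0) + 0.1 * rng.standard_normal(1000)
w_hat = learn_single_neuron(X, y)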
- Metric Hypertransformers are Universal Adapted Maps [4.83420384410068]
Metric hypertransformers (MHTs) are capable of approximating any adapted map $F: \mathscr{X}^{\mathbb{Z}} \rightarrow \mathscr{Y}^{\mathbb{Z}}$ with approximable complexity.
Our results provide the first (quantitative) universal approximation theorem compatible with any such $\mathscr{X}$ and $\mathscr{Y}$.
arXiv Detail & Related papers (2022-01-31T10:03:46Z)
- Random matrices in service of ML footprint: ternary random features with no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z)
- Spectral properties of sample covariance matrices arising from random matrices with independent non-identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\text{tr}(A R(z))$, for $R(z) = (\frac{1}{n} X X^{T} - z I_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ ... A numerical sketch of $R(z)$ and $\text{tr}(A R(z))$ is given after this entry.
arXiv Detail & Related papers (2021-09-06T14:21:43Z)
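A small numerical sketch of the objects appearing in the entry above: the resolvent $R(z) = (\frac{1}{n} X X^{T} - z I_p)^{-1}$ of a sample covariance matrix and the functional $\text{tr}(A R(z))$. The dimensions, the matrix $A$, and the spectral parameter $z$ are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 200
X = rng.standard_normal((p, n))                  # p x n matrix with independent columns
A = np.eye(p)                                    # a deterministic matrix in M_p
z = 1.0j                                         # a spectral parameter off the real axis
R = np.linalg.inv(X @ X.T / n - z * np.eye(p))   # resolvent R(z)
functional = np.trace(A @ R)                     # the functional tr(A R(z))
print(functional)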
- Universal Regular Conditional Distributions via Probability Measure-Valued Deep Neural Models [3.8073142980733]
We find that any model built using the proposed framework is dense in the space $C(\mathcal{X}, \mathcal{P}_1(\mathcal{Y}))$.
The proposed models are also shown to be capable of generically expressing the aleatoric uncertainty present in most randomized machine learning models.
arXiv Detail & Related papers (2021-05-17T11:34:09Z)
- Learners' languages [0.0]
The authors show that the fundamental elements of deep learning, gradient descent and backpropagation, can be conceptualized as a strong monoidal functor.
We show that a map $A \to B$ in $\mathbf{Para}(\mathbf{SLens})$ has a natural interpretation in terms of dynamical systems.
arXiv Detail & Related papers (2021-03-01T18:34:00Z)
- Learning a Lie Algebra from Unlabeled Data Pairs [7.329382191592538]
Deep convolutional networks (convnets) show a remarkable ability to learn disentangled representations.
This article proposes a machine learning method to discover a nonlinear transformation of the space $\mathbb{R}^n$.
The key idea is to approximate every target $\boldsymbol{y}_i$ by a matrix-vector product of the form $\widetilde{\boldsymbol{y}}_i = \boldsymbol{\phi}(t_i)\, \boldsymbol{x}_i$. A small fitting sketch based on this idea is given after this entry.
arXiv Detail & Related papers (2020-09-19T23:23:52Z)
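A minimal sketch of the approximation idea in the entry above, under the assumption (suggested by the paper's title, not stated in the summary) that the transformation is generated by a Lie-algebra element, i.e. $\boldsymbol{\phi}(t) = \exp(t A)$ for a learned matrix $A$, fitted here by gradient descent in PyTorch. The optimizer, iteration count, and the synthetic rotation example are illustrative.

import torch

def fit_lie_generator(x, y, t, n_iters=2000, lr=1e-2):
    """Fit a matrix A so that exp(t_i A) x_i approximates y_i; x, y: (m, n), t: (m,)."""
    n = x.shape[1]
    A = torch.zeros(n, n, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        phi = torch.linalg.matrix_exp(t[:, None, None] * A)   # batch of phi(t_i), shape (m, n, n)
        y_hat = torch.einsum('mij,mj->mi', phi, x)
        loss = ((y_hat - y) ** 2).mean()
        loss.backward()
        opt.step()
    return A.detach()

# Usage on synthetic planar rotations: y_i = exp(t_i G) x_i with G the rotation generator.
G = torch.tensor([[0.0, -1.0], [1.0, 0.0]])
t = torch.rand(64) * 3.0
x = torch.randn(64, 2)
y = torch.einsum('mij,mj->mi', torch.linalg.matrix_exp(t[:, None, None] * G), x)
A_hat = fit_lie_generator(x, y, t)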