Improved Learning via k-DTW: A Novel Dissimilarity Measure for Curves
- URL: http://arxiv.org/abs/2505.23431v1
- Date: Thu, 29 May 2025 13:25:45 GMT
- Title: Improved Learning via k-DTW: A Novel Dissimilarity Measure for Curves
- Authors: Amer Krivošija, Alexander Munteanu, André Nusser, Chris Schwiegelshohn
- Abstract summary: We introduce a novel dissimilarity measure for polygonal curves called $k$-Dynamic Time Warping ($k$-DTW). $k$-DTW has stronger metric properties than Dynamic Time Warping (DTW) and is more robust to outliers than the Fr\'{e}chet distance. We show interesting properties of $k$-DTW and give an exact algorithm as well as a $(1+\varepsilon)$-approximation algorithm for $k$-DTW.
- Score: 48.84313029960523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces $k$-Dynamic Time Warping ($k$-DTW), a novel dissimilarity measure for polygonal curves. $k$-DTW has stronger metric properties than Dynamic Time Warping (DTW) and is more robust to outliers than the Fr\'{e}chet distance, which are the two gold standards of dissimilarity measures for polygonal curves. We show interesting properties of $k$-DTW and give an exact algorithm as well as a $(1+\varepsilon)$-approximation algorithm for $k$-DTW by a parametric search for the $k$-th largest matched distance. We prove the first dimension-free learning bounds for curves and further learning theoretic results. $k$-DTW not only admits smaller sample size than DTW for the problem of learning the median of curves, where some factors depending on the curves' complexity $m$ are replaced by $k$, but we also show a surprising separation on the associated Rademacher and Gaussian complexities: $k$-DTW admits strictly smaller bounds than DTW, by a factor $\tilde\Omega(\sqrt{m})$ when $k\ll m$. We complement our theoretical findings with an experimental illustration of the benefits of using $k$-DTW for clustering and nearest neighbor classification.
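To make the measure concrete, below is a minimal brute-force sketch in Python. It assumes the definition suggested by the abstract: $k$-DTW$(P, Q)$ is the minimum, over all monotone warping paths between the curves, of the sum of the $k$ largest matched point distances, so that $k = 1$ behaves like the discrete Fr\'{e}chet distance and large $k$ recovers DTW. The function names and toy curves are illustrative, and the exponential path enumeration is for intuition only; it is not the paper's exact algorithm or its $(1+\varepsilon)$-approximation via parametric search for the $k$-th largest matched distance.

```python
# Naive k-DTW sketch, assuming k-DTW = min over monotone warping paths of the
# sum of the k largest matched distances (consistent with the abstract).
# Brute-force enumeration: exponential time, for tiny curves and intuition only;
# NOT the paper's exact or (1+eps)-approximation algorithm.
import math

def couplings(n, m):
    """Yield all monotone warping paths from (0, 0) to (n-1, m-1)."""
    def rec(i, j, path):
        if i == n - 1 and j == m - 1:
            yield path
            return
        if i + 1 < n:
            yield from rec(i + 1, j, path + [(i + 1, j)])
        if j + 1 < m:
            yield from rec(i, j + 1, path + [(i, j + 1)])
        if i + 1 < n and j + 1 < m:
            yield from rec(i + 1, j + 1, path + [(i + 1, j + 1)])
    yield from rec(0, 0, [(0, 0)])

def k_dtw(P, Q, k):
    """Min over warping paths of the sum of the k largest matched distances."""
    best = math.inf
    for path in couplings(len(P), len(Q)):
        dists = sorted((math.dist(P[i], Q[j]) for i, j in path), reverse=True)
        best = min(best, sum(dists[:k]))
    return best

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 5), (2, 1)]   # middle point is an outlier
print(k_dtw(P, Q, k=1))  # Frechet-like: dominated by the single worst match (5.0)
print(k_dtw(P, Q, k=6))  # k >= path length: recovers the full DTW sum (7.0)
```

On these toy curves the outlier match dominates the $k = 1$ (Fr\'{e}chet-like) value, while larger $k$ spreads the cost over all matches as in DTW; under this assumed definition, $k$ interpolates between the two gold standards discussed in the abstract.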
Related papers
- Metric Embeddings Beyond Bi-Lipschitz Distortion via Sherali-Adams [34.7582575446942]
We give the first approximation algorithm for MDS with quasi-polynomial dependency on $\Delta$. Our algorithms are based on a novel geometry-aware analysis of a conditional rounding of the Sherali-Adams LP.
arXiv Detail & Related papers (2023-11-29T17:42:05Z)
- Online Learning with Adversaries: A Differential-Inclusion Analysis [52.43460995467893]
We introduce an observation-matrix-based framework for fully asynchronous online Federated Learning with adversaries.
Our main result is that the proposed algorithm almost surely converges to the desired mean $\mu$.
We derive this convergence using a novel differential-inclusion-based two-timescale analysis.
arXiv Detail & Related papers (2023-04-04T04:32:29Z)
- Variance-aware robust reinforcement learning with linear function approximation with heavy-tailed rewards [6.932056534450556]
We present two algorithms, AdaOFUL and VARA, for online sequential decision-making in the presence of heavy-tailed rewards.
AdaOFUL achieves a state-of-the-art regret bound.
VARA achieves a tighter variance-aware regret bound of $\widetilde{\mathcal{O}}(d\sqrt{H\mathcal{G}^* K})$.
arXiv Detail & Related papers (2023-03-09T22:16:28Z)
- Beyond Moments: Robustly Learning Affine Transformations with Asymptotically Optimal Error [8.615625517708324]
We present a polynomial-time algorithm for learning an unknown affine transformation of the standard hypercube from samples.
Our algorithm is based on a new method that iteratively improves an estimate of the unknown affine transformation whenever the requirements of the certificate are not met.
arXiv Detail & Related papers (2023-02-23T19:13:30Z)
- Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs [111.75736569611159]
We propose the first computationally efficient horizon-free algorithm for linear mixture MDPs.
Our algorithm adapts a weighted least squares estimator for the unknown transition dynamics.
This also improves upon the best-known algorithms in this setting when the $\sigma_k^2$'s are known.
arXiv Detail & Related papers (2022-05-23T17:59:18Z)
- Adversarially Robust Learning with Tolerance [8.658596218544774]
We study the problem of tolerant adversarial PAC learning with respect to metric perturbation sets.
We show that a variant of the natural perturb-and-smooth algorithm PAC learns any hypothesis class $\mathcal{H}$ with VC dimension $v$ in the $\gamma$-tolerant adversarial setting.
We additionally propose an alternative learning method which yields sample bounds with only linear dependence on the doubling dimension.
arXiv Detail & Related papers (2022-03-02T03:50:16Z)
- Policy Optimization Using Semiparametric Models for Dynamic Pricing [1.3428344011390776]
We study the contextual dynamic pricing problem where the market value of a product is linear in its observed features plus some market noise.
We propose a dynamic statistical learning and decision-making policy that combines semiparametric estimation from a generalized linear model with an unknown link and online decision-making.
arXiv Detail & Related papers (2021-09-13T23:50:01Z)
- Variance-Aware Confidence Set: Variance-Dependent Bound for Linear Bandits and Horizon-Free Bound for Linear Mixture MDP [76.94328400919836]
We show how to construct variance-aware confidence sets for linear bandits and linear mixture Markov Decision Processes (MDPs).
For linear bandits, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{1 + \sum_{i=1}^{K}\sigma_i^2})$ regret bound, where $d$ is the feature dimension.
For linear mixture MDPs, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{K})$ regret bound.
arXiv Detail & Related papers (2021-01-29T18:57:52Z)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
- Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)