Characterizing how 'distributional' NLP corpora distance metrics are
- URL: http://arxiv.org/abs/2310.14829v1
- Date: Mon, 23 Oct 2023 11:48:23 GMT
- Title: Characterizing how 'distributional' NLP corpora distance metrics are
- Authors: Samuel Ackerman, George Kour, Eitan Farchi
- Abstract summary: We describe an abstract quality, called `distributionality', of such metrics.
A non-distributional metric tends to use very local measurements.
A more distributional metric will, in contrast, better capture the distributions' overall distance.
- Score: 2.4921910293793412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A corpus of vector-embedded text documents has some empirical distribution.
Given two corpora, we want to calculate a single metric of distance (e.g.,
Mauve, Frechet Inception) between them. We describe an abstract quality, called
`distributionality', of such metrics. A non-distributional metric tends to use
very local measurements, or uses global measurements in a way that does not
fully reflect the distributions' true distance. For example, if individual
pairwise nearest-neighbor distances are low, it may judge the two corpora to
have low distance, even if their two distributions are in fact far from each
other. A more distributional metric will, in contrast, better capture the
distributions' overall distance. We quantify this quality by constructing a
Known-Similarity Corpora set from two paraphrase corpora and calculating the
distance between paired corpora from it. The distances' trend shape as set
element separation increases should quantify the distributionality of the
metric. We propose that Average Hausdorff Distance and energy distance between
corpora are representative examples of non-distributional and distributional
distance metrics, to which other metrics can be compared, to evaluate how
distributional they are.
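To make the two reference points concrete, below is a minimal illustrative sketch (in Python, not code from the paper) of Average Hausdorff Distance and energy distance computed on matrices of vector-embedded documents; the particular symmetrization of the Hausdorff score and the toy Gaussian "corpora" are assumptions made for the example.

```python
import numpy as np
from scipy.spatial.distance import cdist

def average_hausdorff(X, Y):
    """One common symmetrization of the Average Hausdorff Distance:
    each point's distance to its nearest neighbour in the other corpus,
    averaged in both directions (definitions vary across the literature)."""
    D = cdist(X, Y)  # pairwise Euclidean distances, shape (|X|, |Y|)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

def energy_distance_sq(X, Y):
    """Squared energy distance: 2*E||x-y|| - E||x-x'|| - E||y-y'||
    (V-statistic estimate; the within-corpus means include the zero diagonal)."""
    return (2.0 * cdist(X, Y).mean()
            - cdist(X, X).mean()
            - cdist(Y, Y).mean())

# Toy "corpora": two clouds of 5-dimensional document embeddings.
rng = np.random.default_rng(0)
corpus_a = rng.normal(0.0, 1.0, size=(300, 5))
corpus_b = rng.normal(0.3, 1.0, size=(300, 5))

print("Average Hausdorff:", average_hausdorff(corpus_a, corpus_b))
print("Energy distance^2:", energy_distance_sq(corpus_a, corpus_b))
```

Because the Hausdorff-style score depends only on nearest-neighbour distances, two corpora whose points interleave can score near zero even when their overall distributions differ, whereas the energy distance responds to the mismatch between the full distributions.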
Related papers
- Computing the Distance between unbalanced Distributions -- The flat Metric [0.0]
The flat metric generalizes the well-known Wasserstein distance W1 to the case that the distributions are of unequal total mass.
The core of the method is based on a neural network that determines an optimal test function realizing the distance between two measures.
arXiv Detail & Related papers (2023-08-02T09:30:22Z)
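For reference, the flat metric (also called the bounded-Lipschitz or Dudley metric) is commonly stated in the dual form

\[ d_F(\mu, \nu) \;=\; \sup\Big\{ \textstyle\int f \, d(\mu - \nu) \;:\; \|f\|_\infty \le 1,\ \operatorname{Lip}(f) \le 1 \Big\}, \]

so the neural network mentioned above plays the role of the test function $f$; the exact normalization used in that paper may differ.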
- Fisher-Rao distance and pullback SPD cone distances between multivariate normal distributions [7.070726553564701]
We introduce a class of distances based on diffeomorphic embeddings of the normal manifold into a submanifold.
We show that the projective Hilbert distance on the cone yields a metric on the embedded normal submanifold.
We show how to use those distances in clustering tasks.
arXiv Detail & Related papers (2023-07-20T07:14:58Z)
- Energy-Based Sliced Wasserstein Distance [47.18652387199418]
A key component of the sliced Wasserstein (SW) distance is the slicing distribution.
We propose to design the slicing distribution as an energy-based distribution that is parameter-free.
We then derive a novel sliced Wasserstein metric, the energy-based sliced Wasserstein (EBSW) distance.
arXiv Detail & Related papers (2023-04-26T14:28:45Z)
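For context on the entry above: with a slicing distribution $\sigma$ over directions $\theta$ on the unit sphere, the sliced Wasserstein distance is usually written as

\[ \mathrm{SW}_p(\mu, \nu) \;=\; \Big( \mathbb{E}_{\theta \sim \sigma}\big[ W_p^p(\theta_{\#}\mu, \theta_{\#}\nu) \big] \Big)^{1/p}, \]

where $\theta_{\#}\mu$ is the projection of $\mu$ onto direction $\theta$. Roughly, EBSW chooses $\sigma$ to up-weight directions with large projected distance via a parameter-free energy function; see the paper for the precise construction.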
- LMR: Lane Distance-Based Metric for Trajectory Prediction [10.83642398981694]
Currently established metrics are based on Euclidean distance, which means that errors are weighted equally in all directions.
We propose a new metric that is lane distance-based: Lane Miss Rate (LMR).
LMR is defined as the ratio of sequences that yield a miss.
arXiv Detail & Related papers (2023-04-12T13:59:04Z)
- Kernel distance measures for time series, random fields and other structured data [71.61147615789537]
kdiff is a novel kernel-based measure for estimating distances between instances of structured data.
It accounts for both self and cross similarities across the instances and is defined using a lower quantile of the distance distribution.
Some theoretical results are provided for separability conditions using kdiff as a distance measure for clustering and classification problems.
arXiv Detail & Related papers (2021-09-29T22:54:17Z)
- On the capacity of deep generative networks for approximating distributions [8.798333793391544]
We prove that neural networks can transform a one-dimensional source distribution to a distribution arbitrarily close to a high-dimensional target distribution in Wasserstein distances.
It is shown that the approximation error grows at most linearly on the ambient dimension.
$f$-divergences are less adequate than Wasserstein distances as metrics of distributions for generating samples.
arXiv Detail & Related papers (2021-01-29T01:45:02Z)
- Linear Optimal Transport Embedding: Provable Wasserstein classification for certain rigid transformations and perturbations [79.23797234241471]
Discriminating between distributions is an important problem in a number of scientific fields.
The Linear Optimal Transportation (LOT) framework embeds the space of distributions into an $L^2$-space.
We demonstrate the benefits of LOT on a number of distribution classification problems.
arXiv Detail & Related papers (2020-08-20T19:09:33Z)
- On the Relation between Quality-Diversity Evaluation and Distribution-Fitting Goal in Text Generation [86.11292297348622]
We show that a linear combination of quality and diversity constitutes a divergence metric between the generated distribution and the real distribution.
We propose CR/NRR as a substitute for the quality/diversity metric pair.
arXiv Detail & Related papers (2020-07-03T04:06:59Z)
- Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves two essential facts: 1) minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only confirm the validity of the theoretical results but also demonstrate that our approach can substantially outperform comparable state-of-the-art methods.
arXiv Detail & Related papers (2020-07-01T18:25:10Z)
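For context, the standard squared Maximum Mean Discrepancy between distributions $P$ and $Q$ under a kernel $k$ is

\[ \mathrm{MMD}^2(P, Q) \;=\; \mathbb{E}_{x,x' \sim P}[k(x,x')] \;-\; 2\,\mathbb{E}_{x \sim P,\ y \sim Q}[k(x,y)] \;+\; \mathbb{E}_{y,y' \sim Q}[k(y,y')]; \]

the intra-class distance and variance analysis mentioned above is that paper's own decomposition of this standard quantity.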
- Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z)
- Theoretical Guarantees for Bridging Metric Measure Embedding and Optimal Transport [18.61019008000831]
We consider a method that embeds metric measure spaces into a common Euclidean space and computes optimal transport (OT) between the embedded distributions.
This leads to what we call a sub-embedding robust Wasserstein (SERW) distance.
arXiv Detail & Related papers (2020-02-19T17:52:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.