Dendrogram distance: an evaluation metric for generative networks using
hierarchical clustering
- URL: http://arxiv.org/abs/2311.16894v1
- Date: Tue, 28 Nov 2023 15:46:12 GMT
- Title: Dendrogram distance: an evaluation metric for generative networks using
hierarchical clustering
- Authors: Gustavo Sutter Carvalho and Moacir Antonelli Ponti
- Abstract summary: We present a novel metric for generative modeling evaluation, focusing primarily on generative networks.
The method uses dendrograms to represent real and fake data, allowing for the divergence between training and generated samples to be computed.
- Score: 2.4283303315272713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel metric for generative modeling evaluation, focusing
primarily on generative networks. The method uses dendrograms to represent real
and fake data, allowing for the divergence between training and generated
samples to be computed. This metric focuses on mode collapse, targeting
generators that fail to capture all modes of the training set. To
evaluate the proposed method, we introduce a validation scheme based on
sampling from real datasets, so that the metric is evaluated in a controlled
environment and proves competitive with other state-of-the-art
approaches.
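The abstract describes representing real and generated samples as dendrograms and computing a divergence between them. A minimal sketch of that idea, assuming Ward linkage and a simple comparison of sorted merge heights (the function names, the linkage choice, and the height comparison are illustrative assumptions, not the paper's exact construction):

```python
# Hypothetical sketch of a dendrogram-based divergence between real and
# generated samples; NOT the authors' exact formulation.
import numpy as np
from scipy.cluster.hierarchy import linkage

def merge_heights(samples: np.ndarray) -> np.ndarray:
    """Sorted heights at which clusters merge in a Ward-linkage dendrogram."""
    Z = linkage(samples, method="ward")
    return np.sort(Z[:, 2])  # column 2 of the linkage matrix holds merge distances

def dendrogram_divergence(real: np.ndarray, fake: np.ndarray) -> float:
    """Mean absolute difference between the two sorted merge-height profiles
    (requires equally sized sample sets)."""
    return float(np.mean(np.abs(merge_heights(real) - merge_heights(fake))))

rng = np.random.default_rng(0)
real = rng.normal(size=(128, 2))
real[:64] += 5.0                       # reference data with two separated modes
good = rng.normal(size=(128, 2))
good[:64] += 5.0                       # generator that covers both modes
collapsed = rng.normal(size=(128, 2))  # mode collapse: only one mode

# A collapsed generator misses the high merge that joins the two modes,
# so its dendrogram sits farther from the real data's dendrogram.
print(dendrogram_divergence(real, collapsed) > dendrogram_divergence(real, good))
```

Because mode structure shows up as large merge heights near the top of the dendrogram, a generator that drops a mode distorts exactly that part of the height profile, which is what makes this style of metric sensitive to mode collapse.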
Related papers
- Model Evaluation and Anomaly Detection in Temporal Complex Networks using Deep Learning Methods [0.0]
This paper proposes an automatic, deep-learning-based approach to evaluating the results of temporal network models.
In addition to an evaluation method, the proposed method can also be used for anomaly detection in evolving networks.
arXiv Detail & Related papers (2024-06-15T09:19:09Z)
- Generative modeling of density regression through tree flows [3.0262553206264893]
We propose a flow-based generative model tailored for the density regression task on tabular data.
We introduce a training algorithm for fitting the tree-based transforms using a divide-and-conquer strategy.
Our method consistently achieves comparable or superior performance at a fraction of the training and sampling budget.
arXiv Detail & Related papers (2024-06-07T21:07:35Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both approaches: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Supervised Homography Learning with Realistic Dataset Generation [60.934401870005026]
We propose an iterative framework, which consists of two phases: a generation phase and a training phase.
In the generation phase, given an unlabeled image pair, we utilize the pre-estimated dominant plane masks and homography of the pair.
In the training phase, the generated data is used to train the supervised homography network.
arXiv Detail & Related papers (2023-07-28T07:03:18Z)
- Metric Distribution to Vector: Constructing Data Representation via Broad-Scale Discrepancies [15.40538348604094]
We present a novel embedding strategy named $\mathbf{MetricDistribution2vec}$ to extract distribution characteristics into a vector representation for each data point.
We demonstrate the application and effectiveness of our representation method in the supervised prediction tasks on extensive real-world structural graph datasets.
arXiv Detail & Related papers (2022-10-02T03:18:30Z)
- VAESim: A probabilistic approach for self-supervised prototype discovery [0.23624125155742057]
We propose an architecture for image stratification based on a conditional variational autoencoder.
We use a continuous latent space to represent the continuum of disorders and find clusters during training, which can then be used for image/patient stratification.
We demonstrate that our method outperforms baselines in terms of kNN accuracy measured on a classification task against a standard VAE.
arXiv Detail & Related papers (2022-09-25T17:55:31Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Adversarial Manifold Matching via Deep Metric Learning for Generative Modeling [5.5840609705075055]
We propose a manifold matching approach to generative models which includes a distribution generator and a metric generator.
The distribution generator aims at generating samples that follow some distribution condensed around the real data manifold.
The metric generator utilizes both real data and generated samples to learn a distance metric.
arXiv Detail & Related papers (2021-06-20T23:25:01Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
- Benchmarking Network Embedding Models for Link Prediction: Are We Making Progress? [84.43405961569256]
We shed light on the state-of-the-art of network embedding methods for link prediction.
We show, using a consistent evaluation pipeline, that only little progress has been made in recent years.
We argue that standardized evaluation tools can repair this situation and boost future progress in this field.
arXiv Detail & Related papers (2020-02-25T16:59:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.