A Benchmark Suite for Evaluating Neural Mutual Information Estimators on Unstructured Datasets
- URL: http://arxiv.org/abs/2410.10924v1
- Date: Mon, 14 Oct 2024 14:22:38 GMT
- Title: A Benchmark Suite for Evaluating Neural Mutual Information Estimators on Unstructured Datasets
- Authors: Kyungeun Lee, Wonjong Rhee
- Abstract summary: Mutual Information (MI) is a fundamental metric for quantifying dependency between two random variables.
This study introduces a comprehensive benchmark suite for evaluating neural MI estimators on unstructured datasets.
- Score: 3.2228025627337864
- Abstract: Mutual Information (MI) is a fundamental metric for quantifying the dependency between two random variables. When we can access only the samples, but not the underlying distribution functions, we can evaluate MI using sample-based estimators. Assessment of such MI estimators, however, has almost always relied on analytical datasets, such as multivariate Gaussians. Such datasets allow analytical calculation of the true MI values, but they do not reflect the complexities of real-world datasets. This study introduces a comprehensive benchmark suite for evaluating neural MI estimators on unstructured datasets, specifically focusing on images and texts. By leveraging same-class sampling for positive pairing and introducing a binary symmetric channel trick, we show that we can accurately manipulate the true MI values of real-world datasets. Using the benchmark suite, we investigate seven challenging scenarios, shedding light on the reliability of neural MI estimators for unstructured datasets.
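To make the construction concrete, here is a minimal sketch of how same-class sampling combined with a binary symmetric channel (BSC) yields a controllable ground-truth MI. This is an illustration under assumed details, not the authors' released code, and the helper names `bsc_mutual_information` and `sample_pairs` are hypothetical: with a uniform binary class label passed through a BSC with crossover probability p, the MI between the two labels is 1 - H_b(p) bits, which becomes the target MI for the sample pair when each sample's class is recoverable from it.

```python
import numpy as np

def bsc_mutual_information(p: float) -> float:
    """I(C; C') in bits for a binary symmetric channel with crossover
    probability p and a uniform binary input: 1 - H_b(p)."""
    if p in (0.0, 1.0):
        return 1.0
    h_b = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # binary entropy
    return 1.0 - h_b

def sample_pairs(images, labels, p_flip, n_pairs, seed=0):
    """Same-class sampling with a BSC twist: draw x from a uniform
    binary class c, flip c with probability p_flip to get c', then
    draw y from class c'."""
    rng = np.random.default_rng(seed)
    idx = {c: np.flatnonzero(labels == c) for c in (0, 1)}
    xs, ys = [], []
    for _ in range(n_pairs):
        c = int(rng.integers(2))                  # uniform input label
        c_out = c ^ int(rng.random() < p_flip)    # BSC crossover
        xs.append(images[rng.choice(idx[c])])
        ys.append(images[rng.choice(idx[c_out])])
    return np.stack(xs), np.stack(ys)
```

Sweeping p_flip between 0 and 0.5 moves the label-level MI continuously between 1 bit and 0, which illustrates how true MI values can be dialed in on real images and texts without any analytical density.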
Related papers
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
The proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
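As a rough illustration of this scheme (my sketch, not the authors' code; it assumes the third-party `minisom` package and toy data): train a SOM on unlabeled data, attach the few available targets to their BMUs, and predict for a new point via the labeled unit nearest to its own BMU on the map grid.

```python
import numpy as np
from minisom import MiniSom  # assumes the `minisom` package is installed

rng = np.random.default_rng(0)
X = rng.random((500, 4))                     # unlabeled training data
som = MiniSom(10, 10, input_len=4, random_seed=0)
som.train(X, num_iteration=1000)

# a minimal number of labeled points: map each onto its BMU
labeled_bmus = {som.winner(x): y for x, y in [(X[0], 0.3), (X[1], 0.7)]}

def predict(x):
    """Return the target attached to the labeled BMU closest (on the
    map grid) to the query point's own BMU."""
    w = np.asarray(som.winner(x))
    nearest = min(labeled_bmus, key=lambda u: np.linalg.norm(w - u))
    return labeled_bmus[nearest]

print(predict(rng.random(4)))                # toy regression prediction
```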
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Sparse Bayesian Multidimensional Item Response Theory [0.0]
We develop a Bayesian platform for binary and ordinal item MIRT that requires minimal tuning and scales well to large datasets.
We address the seemingly insurmountable problem of unknown latent factor dimensionality with tools from Bayesian nonparametrics.
Our method reliably recovers both the factor dimensionality as well as the latent structure on high-dimensional synthetic data even for small samples.
arXiv Detail & Related papers (2023-10-26T23:50:50Z)
- On the Evaluation and Refinement of Vision-Language Instruction Tuning Datasets [71.54954966652286]
We evaluate Vision-Language Instruction-Tuning (VLIT) datasets.
We build a new dataset, REVO-LION, by collecting samples with a higher sample quality (SQ) score from each dataset.
Remarkably, even with only half of the complete data, the model trained on REVO-LION can achieve the performance comparable to simply adding all VLIT datasets up.
arXiv Detail & Related papers (2023-10-10T13:01:38Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
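One of those factors, calculating a fair distance between two sample sets, can be illustrated with the Fréchet distance that underlies FID-style scores. This is a generic sketch, not code from the paper, and it assumes features have already been extracted by some representation model.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets:
    ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    s_a = np.cov(feats_a, rowvar=False)
    s_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(s_a @ s_b).real          # drop tiny imaginary parts
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(s_a + s_b - 2.0 * covmean))
```

How many instances to draw from each set before this distance stabilizes is exactly the kind of question the study examines.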
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Diffeomorphic Information Neural Estimation [2.566492438263125]
Mutual Information (MI) and Conditional Mutual Information (CMI) are multi-purpose tools from information theory.
We introduce DINE (Diffeomorphic Information Neural Estimator), a novel approach for estimating the CMI of continuous random variables.
We show that the variables of interest can be replaced with appropriate surrogates that follow simpler distributions.
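A quick numerical check of the property this relies on (my sketch, not DINE itself): MI is invariant under smooth invertible, i.e. diffeomorphic, transforms, so I(X;Y) = I(f(X); g(Y)). Here scikit-learn's nearest-neighbor estimator stands in for the neural estimator.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
rho, n = 0.8, 20_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

true_mi = -0.5 * np.log(1 - rho**2)                 # nats, closed form
mi_raw = mutual_info_regression(x[:, None], y, random_state=0)[0]
mi_warped = mutual_info_regression(np.exp(x)[:, None], np.tanh(y),
                                   random_state=0)[0]
print(true_mi, mi_raw, mi_warped)                   # all three roughly agree
```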
arXiv Detail & Related papers (2022-11-20T03:03:56Z)
- A robust estimator of mutual information for deep learning interpretability [2.574652392763709]
We present GMM-MI, an algorithm that can be applied to both discrete and continuous settings.
We extensively validate GMM-MI on toy data for which the ground truth MI is known.
We then demonstrate the use of our MI estimator in the context of representation learning.
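A toy Monte Carlo rendition of the Gaussian-mixture idea (my sketch, not the released GMM-MI implementation): fit mixtures to the joint and to each marginal, then average log p(x,y) - log p(x) - log p(y) over samples drawn from the fitted joint.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 1))
y = 0.8 * x + 0.6 * rng.normal(size=(5000, 1))      # corr(x, y) = 0.8
xy = np.hstack([x, y])

gm_xy = GaussianMixture(n_components=3, random_state=0).fit(xy)
gm_x = GaussianMixture(n_components=3, random_state=0).fit(x)
gm_y = GaussianMixture(n_components=3, random_state=0).fit(y)

s = gm_xy.sample(50_000)[0]                         # draws from fitted joint
mi = np.mean(gm_xy.score_samples(s)
             - gm_x.score_samples(s[:, :1])
             - gm_y.score_samples(s[:, 1:]))
print(f"estimated MI ~ {mi:.3f} nats (closed form: "
      f"{-0.5 * np.log(1 - 0.8**2):.3f})")
```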
arXiv Detail & Related papers (2022-10-31T18:00:02Z)
- Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics [58.50754318846996]
In this paper, we show that the performance of metrics is sensitive to the evaluation data.
The ranking of metrics varies when the evaluation is conducted on different datasets.
arXiv Detail & Related papers (2022-03-29T18:58:28Z)
- RIFLE: Imputation and Robust Inference from Low Order Marginals [10.082738539201804]
We develop a statistical inference framework for regression and classification in the presence of missing data without imputation.
Our framework, RIFLE, estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model.
Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small.
arXiv Detail & Related papers (2021-09-01T23:17:30Z)
- Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators [91.3755431537592]
We show that even in arguably the simplest setting, estimation under ignorability assumptions can be misleading.
We consider two popular machine learning benchmark datasets for evaluation of heterogeneous treatment effect estimators.
We highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others.
arXiv Detail & Related papers (2021-07-28T13:21:27Z)
- CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems [121.78477833009671]
We investigate the performance of different summarization models under a cross-dataset setting.
A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation approaches.
arXiv Detail & Related papers (2020-10-11T02:19:15Z)