On the data requirements of probing
- URL: http://arxiv.org/abs/2202.12801v1
- Date: Fri, 25 Feb 2022 16:27:06 GMT
- Title: On the data requirements of probing
- Authors: Zining Zhu, Jixuan Wang, Bai Li, Frank Rudzicz
- Abstract summary: We present a novel method to estimate the required number of data samples for probing datasets.
Our framework helps to systematically construct probing datasets to diagnose neural NLP models.
- Score: 20.965328323152608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As large and powerful neural language models are developed, researchers have
been increasingly interested in developing diagnostic tools to probe them.
There are many papers with conclusions of the form "observation X is found in
model Y", using their own datasets with varying sizes. Larger probing datasets
bring more reliability, but are also expensive to collect. There is yet to be a
quantitative method for estimating reasonable probing dataset sizes. We tackle
this omission in the context of comparing two probing configurations: after we
have collected a small dataset from a pilot study, how many additional data
samples are sufficient to distinguish two different configurations? We present
a novel method to estimate the required number of data samples in such
experiments and, across several case studies, we verify that our estimations
have sufficient statistical power. Our framework helps to systematically
construct probing datasets to diagnose neural NLP models.
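The abstract's core question (how many additional samples are sufficient to distinguish two probing configurations at a given statistical power) can be illustrated with a standard two-proportion power calculation. This is a generic textbook sketch under a normal approximation, not the paper's actual estimator; the function name and default accuracies are illustrative assumptions.

```python
import math
from statistics import NormalDist

def required_samples(p1, p2, alpha=0.05, power=0.8):
    """Approximate samples per configuration needed to distinguish
    two probing accuracies p1 and p2 with a two-sided two-proportion
    z-test at significance `alpha` and the given statistical power.
    Normal-approximation formula; NOT the paper's method."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    p_bar = (p1 + p2) / 2                # pooled proportion under H0
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# e.g. distinguishing probe accuracies of 80% vs. 85% requires roughly
# 900 samples per configuration at alpha = 0.05 and power = 0.8
print(required_samples(0.80, 0.85))
```

As expected, the closer the two configurations' accuracies, the more samples the pilot study must be extended by, growing roughly with the inverse square of the accuracy gap.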
Related papers
- Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand [9.460857822923842]
Causal inference from observational data plays a critical role in many applications in trustworthy machine learning.
We show how to sample from any identifiable interventional distribution given an arbitrary causal graph.
We also generate high-dimensional interventional samples from the MIMIC-CXR dataset involving text and image variables.
arXiv Detail & Related papers (2024-02-12T05:48:31Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Calibration and generalizability of probabilistic models on low-data chemical datasets with DIONYSUS [0.0]
We perform an extensive study of the calibration and generalizability of probabilistic machine learning models on small chemical datasets.
We analyse the quality of their predictions and uncertainties in a variety of tasks (binary, regression) and datasets.
We offer practical insights into model and feature choice for modelling small chemical datasets, a common scenario in new chemical experiments.
arXiv Detail & Related papers (2022-12-03T08:19:06Z)
- Robustness Analysis of Deep Learning Models for Population Synthesis [5.9106199000537645]
We present bootstrap confidence intervals for deep generative models to evaluate their robustness across multiple datasets.
The models are implemented on travel diaries from the Montreal Origin-Destination Survey of 2008, 2013, and 2018.
Results show that the predictive errors of CTGAN have narrower confidence intervals, indicating its robustness across multiple datasets.
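The bootstrap confidence intervals this summary refers to can be sketched generically as a percentile bootstrap over model errors. This is a minimal illustration of the technique, not the paper's code; the function name and inputs are assumptions.

```python
import random
import statistics

def bootstrap_ci(errors, n_boot=2000, level=0.95, seed=0):
    """Percentile-bootstrap confidence interval for the mean
    predictive error. Generic sketch, not the paper's implementation."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    means = []
    for _ in range(n_boot):
        # resample the errors with replacement and record the mean
        resample = [rng.choice(errors) for _ in errors]
        means.append(statistics.fmean(resample))
    means.sort()
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot)]
    return lo, hi
```

A narrower interval from this procedure, computed per dataset, is what the summary above cites as evidence of robustness.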
arXiv Detail & Related papers (2022-11-23T22:55:55Z)
- Ensemble Machine Learning Model Trained on a New Synthesized Dataset Generalizes Well for Stress Prediction Using Wearable Devices [3.006016887654771]
We investigate the generalization ability of models built on datasets containing a small number of subjects, recorded in single study protocols.
We propose and evaluate the use of ensemble techniques by combining gradient boosting with an artificial neural network to measure predictive power on new, unseen data.
arXiv Detail & Related papers (2022-09-30T00:20:57Z)
- Zero-shot meta-learning for small-scale data from human subjects [10.320654885121346]
We develop a framework to rapidly adapt to a new prediction task with limited training data for out-of-sample test data.
Our model learns the latent treatment effects of each intervention and, by design, can naturally handle multi-task predictions.
Our model has implications for improved generalization of small-size human studies to the wider population.
arXiv Detail & Related papers (2022-03-29T17:42:04Z)
- Combining Observational and Randomized Data for Estimating Heterogeneous Treatment Effects [82.20189909620899]
Estimating heterogeneous treatment effects is an important problem across many domains.
Currently, most existing works rely exclusively on observational data.
We propose to estimate heterogeneous treatment effects by combining large amounts of observational data and small amounts of randomized data.
arXiv Detail & Related papers (2022-02-25T18:59:54Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
arXiv Detail & Related papers (2020-11-18T00:33:24Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called a Prototypical Network, which is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.