Variational Item Response Theory: Fast, Accurate, and Expressive
- URL: http://arxiv.org/abs/2002.00276v2
- Date: Mon, 16 Mar 2020 17:19:23 GMT
- Title: Variational Item Response Theory: Fast, Accurate, and Expressive
- Authors: Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah
Goodman
- Abstract summary: Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions.
We introduce a variational Bayesian inference algorithm for IRT, and show that it is fast and scalable without sacrificing accuracy.
Applying this method to five large-scale item response datasets from cognitive science and education yields higher log likelihoods and improvements in imputing missing data.
- Score: 11.927952652448285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Item Response Theory (IRT) is a ubiquitous model for understanding humans
based on their responses to questions, used in fields as diverse as education,
medicine and psychology. Large modern datasets offer opportunities to capture
more nuances in human behavior, potentially improving test scoring and better
informing public policy. Yet larger datasets pose a difficult speed/accuracy
challenge to contemporary algorithms for fitting IRT models. We introduce a
variational Bayesian inference algorithm for IRT, and show that it is fast and
scalable without sacrificing accuracy. Using this inference approach we then
extend classic IRT with expressive Bayesian models of responses. Applying this
method to five large-scale item response datasets from cognitive science and
education yields higher log likelihoods and improvements in imputing missing
data. The algorithm implementation is open-source and easy to use.
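To make the setup concrete, here is a minimal sketch of fitting a two-parameter logistic (2PL) IRT model with stochastic variational inference in PyTorch. This is not the paper's released implementation; the mean-field Gaussian posteriors, standard-normal priors, synthetic data, and all variable names are illustrative assumptions.

```python
# Sketch: 2PL IRT fit by stochastic variational inference with the
# reparameterization trick. Hypothetical demo, not the paper's code.
import torch
import torch.nn.functional as F

n_people, n_items = 1000, 50

# Synthetic binary response matrix (illustrative data only).
true_theta = torch.randn(n_people, 1)        # abilities
true_disc = torch.rand(1, n_items) + 0.5     # discriminations a_j
true_diff = torch.randn(1, n_items)          # difficulties b_j
responses = torch.bernoulli(torch.sigmoid(true_disc * (true_theta - true_diff)))

# Mean-field Gaussian variational parameters for each latent variable.
theta_mu = torch.zeros(n_people, 1, requires_grad=True)
theta_logvar = torch.zeros(n_people, 1, requires_grad=True)
diff_mu = torch.zeros(1, n_items, requires_grad=True)
diff_logvar = torch.zeros(1, n_items, requires_grad=True)
disc_mu = torch.zeros(1, n_items, requires_grad=True)   # posterior over log a_j
disc_logvar = torch.zeros(1, n_items, requires_grad=True)

params = [theta_mu, theta_logvar, diff_mu, diff_logvar, disc_mu, disc_logvar]
opt = torch.optim.Adam(params, lr=0.05)

def sample(mu, logvar):
    # Reparameterized sample from N(mu, sigma^2).
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def kl_std_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over all dimensions.
    return 0.5 * torch.sum(mu ** 2 + logvar.exp() - logvar - 1.0)

for step in range(500):
    opt.zero_grad()
    theta = sample(theta_mu, theta_logvar)
    diff = sample(diff_mu, diff_logvar)
    disc = torch.exp(sample(disc_mu, disc_logvar))  # keep a_j positive
    logits = disc * (theta - diff)                  # 2PL: a_j * (theta_i - b_j)
    log_lik = -F.binary_cross_entropy_with_logits(logits, responses,
                                                  reduction="sum")
    kl = (kl_std_normal(theta_mu, theta_logvar)
          + kl_std_normal(diff_mu, diff_logvar)
          + kl_std_normal(disc_mu, disc_logvar))
    loss = -(log_lik - kl)  # negative ELBO
    loss.backward()
    opt.step()
```

Under this model, imputing a missing response reduces to evaluating the predicted probability sigmoid(a_j * (theta_i - b_j)) under the fitted posteriors.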
Related papers
- Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning [50.26965628047682]
Adapting pre-trained models to open classes is a challenging problem in machine learning.
In this paper, we consider combining the advantages of both and propose a test-time prompt tuning approach.
Our proposed method outperforms all comparison methods on average considering both base and new classes.
arXiv Detail & Related papers (2024-08-29T12:34:01Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- R-Tuning: Instructing Large Language Models to Say `I Don't Know' [66.11375475253007]
Large language models (LLMs) have revolutionized numerous domains with their impressive performance but still face challenges.
Previous instruction tuning methods force the model to complete a sentence no matter whether the model knows the knowledge or not.
We present a new approach called Refusal-Aware Instruction Tuning (R-Tuning).
Experimental results demonstrate R-Tuning effectively improves a model's ability to answer known questions and refrain from answering unknown questions.
arXiv Detail & Related papers (2023-11-16T08:45:44Z)
- RPLKG: Robust Prompt Learning with Knowledge Graph [11.893917358053004]
We propose a new method, robust prompt learning with knowledge graph (RPLKG).
Based on the knowledge graph, we automatically design diverse interpretable and meaningful prompt sets.
RPLKG shows a significant performance improvement compared to zero-shot learning.
arXiv Detail & Related papers (2023-04-21T08:22:58Z)
- Variational Information Pursuit for Interpretable Predictions [8.894670614193677]
Variational Information Pursuit (V-IP) is a variational characterization of Information Pursuit (IP) that bypasses the need for learning generative models.
V-IP finds much shorter query chains than reinforcement learning, which is typically used in sequential decision-making problems.
We demonstrate the utility of V-IP on challenging tasks like medical diagnosis where the performance is far superior to the generative modelling approach.
arXiv Detail & Related papers (2023-02-06T15:43:48Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Adaptive Learning for Discovery [18.754931451237375]
We study a sequential decision-making problem called Adaptive Sampling for Discovery (ASD).
ASD algorithms adaptively label points with the goal of maximizing the sum of responses.
This problem has wide applications to real-world discovery problems, for example drug discovery aided by machine learning models.
arXiv Detail & Related papers (2022-05-30T03:30:45Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
- Modeling Item Response Theory with Stochastic Variational Inference [8.369065078321215]
We introduce a variational Bayesian inference algorithm for Item Response Theory (IRT).
Applying this method to five large-scale item response datasets yields higher log likelihoods and higher accuracy in imputing missing data.
The algorithm implementation is open-source and easy to use.
arXiv Detail & Related papers (2021-08-26T05:00:27Z)
- Mitigating Temporal-Drift: A Simple Approach to Keep NER Models Crisp [16.960138447997007]
The performance of neural models for named entity recognition degrades over time as the models become stale.
We propose an intuitive approach to measure the potential trendiness of tweets and use this metric to select the most informative instances to use for training.
Our approach shows larger increases in prediction accuracy with less training data than the alternatives, making it an attractive, practical solution.
arXiv Detail & Related papers (2021-04-20T03:35:25Z)