Missing Data Infill with Automunge
- URL: http://arxiv.org/abs/2202.09484v1
- Date: Sat, 19 Feb 2022 00:49:30 GMT
- Title: Missing Data Infill with Automunge
- Authors: Nicholas J. Teague
- Abstract summary: Missing data is a fundamental obstacle in the practice of data science.
This paper surveys a few conventions for imputation available in the Automunge open-source Python library platform.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Missing data is a fundamental obstacle in the practice of data science. This
paper surveys a few conventions for imputation available in the Automunge
open-source Python library platform for tabular data preprocessing, including
"ML infill", in which auto ML models are trained for target features from
partitioned extracts of a training set. A series of validation experiments was
performed to benchmark imputation scenarios by downstream model performance,
in which it was found that, for the given benchmark sets, ML infill in many
cases outperformed the alternatives for both numeric and categoric target
features, and was otherwise at minimum within the noise distributions of the
other imputation scenarios. Evidence also suggested that supplementing ML
infill with support columns of boolean integer markers signaling the presence
of infill was usually beneficial to downstream model performance. We consider
these results sufficient to recommend defaulting to ML infill for tabular
learning, and further recommend supplementing imputations with support columns
signaling the presence of infill, each as can be prepared with push-button
operation in the Automunge library. Our contributions include an auto ML
derived missing data imputation library for tabular learning in the Python
ecosystem, fully integrated into a preprocessing platform with an extensive
library of feature transformations, with a novel production-friendly
implementation that bases imputation models on a designated train set for a
consistent basis toward additional data.
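As a rough illustration of the ML infill idea (a minimal sketch of the technique, not the Automunge implementation, which automates this across an entire feature set with auto ML model selection), the following scikit-learn code trains an imputation model for a single numeric target feature on the partition of the train set where that feature is populated, derives the boolean integer support column, and reuses the same train-set-derived model for additional data; the function and column names here are our own:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def ml_infill_sketch(df_train, df_extra, target):
    """Minimal sketch of ML infill for one numeric target feature.
    Assumes the remaining features are already numerically encoded and
    fully populated (the real library handles the general case)."""
    features = [c for c in df_train.columns if c != target]

    # support columns: boolean integer markers signaling presence of infill
    df_train[target + '_NArw'] = df_train[target].isna().astype(int)
    df_extra[target + '_NArw'] = df_extra[target].isna().astype(int)

    # partition the train set: rows where the target feature is populated
    known = df_train[df_train[target].notna()]
    model = RandomForestRegressor(random_state=0)
    model.fit(known[features], known[target])

    # impute from the same train-set-derived model, for a consistent
    # basis toward additional data
    for df in (df_train, df_extra):
        mask = df[target].isna()
        if mask.any():
            df.loc[mask, target] = model.predict(df.loc[mask, features])
    return df_train, df_extra
```

In the library itself these two behaviors are available as push-button operation; if we recall the interface correctly, recent versions expose them via the MLinfill and NArw_marker parameters to automunge(.).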
Related papers
- LML-DAP: Language Model Learning a Dataset for Data-Augmented Prediction [0.0]
This paper introduces a new approach to using Large Language Models (LLMs) for classification tasks in an explainable way.
The proposed method uses the words "Act as an Explainable Machine Learning Model" in the prompt to enhance the interpretability of the predictions.
In some test cases the system scored above 90% accuracy, demonstrating its effectiveness.
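A guess at what assembling such a prompt from a tabular row might look like (only the quoted instruction phrase comes from the abstract; the template and the build_prompt helper are hypothetical):

```python
def build_prompt(feature_row: dict, label_name: str) -> str:
    """Hypothetical sketch of an LML-DAP-style classification prompt;
    only the quoted instruction phrase is taken from the abstract."""
    features = "\n".join(f"- {k}: {v}" for k, v in feature_row.items())
    return (
        "Act as an Explainable Machine Learning Model.\n"
        f"Given the features below, predict '{label_name}' and explain "
        "your reasoning.\n"
        f"{features}"
    )

print(build_prompt({"age": 42, "income": 55000}, "default_risk"))
```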
arXiv Detail & Related papers (2024-09-27T17:58:50Z)
- Training on the Benchmark Is Not All You Need [52.01920740114261]
We propose a simple and effective data leakage detection method based on the contents of multiple-choice options.
Our method is able to work under black-box conditions without access to model training data or weights.
We evaluate the degree of data leakage of 31 mainstream open-source LLMs on four benchmark datasets.
arXiv Detail & Related papers (2024-09-03T11:09:44Z)
- Julearn: an easy-to-use library for leakage-free evaluation and inspection of ML models [0.23301643766310373]
We present the rationale behind julearn's design, its core features, and showcase three examples of previously published research projects.
Julearn aims to simplify entry into the machine learning world by providing an easy-to-use environment with built-in guards against some of the most common ML pitfalls.
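The class of pitfall such guards target can be illustrated with plain scikit-learn (a generic sketch of the pitfall, not julearn's own API): fitting a scaler on the full dataset before cross-validation leaks test-fold statistics, whereas wrapping preprocessing in a Pipeline refits it inside each fold.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Leaky: the scaler sees all rows, including future test folds.
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(SVC(), X_leaky, y, cv=5)

# Leakage-free: scaling is refit on the train split of every fold.
pipe = make_pipeline(StandardScaler(), SVC())
clean_scores = cross_val_score(pipe, X, y, cv=5)
print(leaky_scores.mean(), clean_scores.mean())
```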
arXiv Detail & Related papers (2023-10-19T08:21:12Z)
- Retrieval-Based Transformer for Table Augmentation [14.460363647772745]
We introduce a novel approach toward automatic data wrangling.
We aim to address table augmentation tasks, including row/column population and data imputation.
Our model consistently and substantially outperforms both supervised statistical methods and the current state-of-the-art transformer-based models.
arXiv Detail & Related papers (2023-06-20T18:51:21Z)
- Numeracy from Literacy: Data Science as an Emergent Skill from Large Language Models [0.0]
Large language models (LLM) such as OpenAI's ChatGPT and GPT-3 offer unique testbeds for exploring the translation challenges of turning literacy into numeracy.
Previous publicly available transformer models, from eighteen months prior and 1000 times smaller, failed to provide basic arithmetic.
This work examines whether next-token prediction extends from sentence completion into the realm of actual numerical understanding.
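A minimal harness for probing such arithmetic ability might look like the following, where complete is a hypothetical stand-in for whichever LLM completion call is being tested:

```python
import random

def arithmetic_probe(complete, n_trials=100, max_operand=999):
    """Score a next-token completion function on basic addition.
    `complete` is a hypothetical callable: prompt string -> completion."""
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(0, max_operand), random.randint(0, max_operand)
        answer = complete(f"Q: What is {a} + {b}?\nA:").strip()
        correct += answer.startswith(str(a + b))
    return correct / n_trials

# e.g. arithmetic_probe(lambda p: my_model.generate(p))  # hypothetical model
```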
arXiv Detail & Related papers (2023-01-31T03:14:57Z)
- Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision [75.1860418333995]
Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm to synthesize training labels efficiently.
The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources as labeling functions.
Existing statistical label models typically rely only on the outputs of the LFs, ignoring instance features when modeling the underlying generative process.
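For context, the simplest such label model is an unweighted majority vote over labeling-function outputs, which ignores both LF accuracies and instance features (a baseline sketch, not the method proposed in the paper):

```python
import numpy as np

def majority_vote(lf_outputs, abstain=-1):
    """Aggregate labeling-function votes per instance; lf_outputs has
    shape (n_instances, n_lfs), with `abstain` marking no vote."""
    labels = []
    for votes in lf_outputs:
        votes = votes[votes != abstain]
        if votes.size == 0:
            labels.append(abstain)  # no LF fired on this instance
        else:
            vals, counts = np.unique(votes, return_counts=True)
            labels.append(vals[np.argmax(counts)])
    return np.array(labels)

lfs = np.array([[1, 1, -1], [0, 1, 0], [-1, -1, -1]])
print(majority_vote(lfs))  # -> [ 1  0 -1]
```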
arXiv Detail & Related papers (2022-10-06T07:28:53Z)
- Data Debugging with Shapley Importance over End-to-End Machine Learning Pipelines [27.461398584509755]
DataScope is the first system that efficiently computes Shapley values of training examples over an end-to-end machine learning pipeline.
Our results show that DataScope is up to four orders of magnitude faster than state-of-the-art Monte Carlo-based methods.
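For reference, the Monte Carlo baseline that DataScope is compared against can be sketched as permutation sampling of marginal contributions (a generic data-Shapley sketch under assumed model and data shapes, not DataScope's algorithm):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mc_data_shapley(X, y, X_val, y_val, n_perms=50):
    """Estimate each training example's Shapley value as its average
    marginal contribution to validation accuracy over random permutations."""
    n = len(X)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = np.random.permutation(n)
        prev_acc = 0.0  # score of the empty prefix
        for k in range(1, n + 1):
            idx = perm[:k]
            if len(np.unique(y[idx])) < 2:
                acc = prev_acc  # cannot fit a classifier on one class
            else:
                model = LogisticRegression().fit(X[idx], y[idx])
                acc = model.score(X_val, y_val)
            values[perm[k - 1]] += acc - prev_acc
            prev_acc = acc
    return values / n_perms
```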
arXiv Detail & Related papers (2022-04-23T19:29:23Z)
- Learning Summary Statistics for Bayesian Inference with Autoencoders [58.720142291102135]
We use the inner dimension of deep neural network-based autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that has been used to generate the training data.
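A conceptual PyTorch sketch of that incentive: the encoder compresses an observation x into a low-dimensional summary s, while the decoder reconstructs x from s together with the noise variables used to simulate x, so s need only carry the parameter-related information (layer sizes and names here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SummaryAutoencoder(nn.Module):
    def __init__(self, x_dim, summary_dim, noise_dim, hidden=64):
        super().__init__()
        # encoder: observation -> low-dimensional summary statistics
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, summary_dim))
        # decoder sees the summary plus the simulation noise, so the
        # summary is pushed to encode parameter information, not noise
        self.decoder = nn.Sequential(
            nn.Linear(summary_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim))

    def forward(self, x, noise):
        s = self.encoder(x)
        x_hat = self.decoder(torch.cat([s, noise], dim=-1))
        return x_hat, s

model = SummaryAutoencoder(x_dim=100, summary_dim=4, noise_dim=100)
x, noise = torch.randn(8, 100), torch.randn(8, 100)
x_hat, s = model(x, noise)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction objective
```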
arXiv Detail & Related papers (2022-01-28T12:00:31Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to train inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
We propose methods to facilitate identification of training data artifacts.
We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
arXiv Detail & Related papers (2021-07-01T09:26:13Z)
- Multi-layer Optimizations for End-to-End Data Analytics [71.05611866288196]
We introduce Iterative Functional Aggregate Queries (IFAQ), a framework that realizes an alternative approach.
IFAQ treats the feature extraction query and the learning task as one program given in IFAQ's domain-specific language.
We show that a Scala implementation of IFAQ can outperform mlpack, Scikit, and specialization by several orders of magnitude for linear regression and regression tree models over several relational datasets.
arXiv Detail & Related papers (2020-01-10T16:14:44Z)