Designing Data: Proactive Data Collection and Iteration for Machine
Learning
- URL: http://arxiv.org/abs/2301.10319v2
- Date: Sat, 29 Jul 2023 02:40:16 GMT
- Title: Designing Data: Proactive Data Collection and Iteration for Machine
Learning
- Authors: Aspen Hopkins, Fred Hohman, Luca Zappella, Xavier Suau Cuadros and
Dominik Moritz
- Abstract summary: Lack of diversity in data collection has caused significant failures in machine learning (ML) applications.
New methods to track & manage data collection, iteration, and model training are necessary for evaluating whether datasets reflect real world variability.
- Score: 12.295169687537395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lack of diversity in data collection has caused significant failures in
machine learning (ML) applications. While ML developers perform post-collection
interventions, these are time intensive and rarely comprehensive. Thus, new
methods to track & manage data collection, iteration, and model training are
necessary for evaluating whether datasets reflect real world variability. We
present designing data, an iterative approach to data collection connecting HCI
concepts with ML techniques. Our process includes (1) Pre-Collection Planning,
to reflexively prompt and document expected data distributions; (2) Collection
Monitoring, to systematically encourage sampling diversity; and (3) Data
Familiarity, to identify samples that are unfamiliar to a model using density
estimation. We apply designing data to a data collection and modeling task. We
find models trained on "designed" datasets generalize better across
intersectional groups than those trained on similarly sized but less targeted
datasets, and that data familiarity is effective for debugging datasets.
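As a concrete illustration of step (3), Data Familiarity, here is a minimal sketch of density-estimation-based familiarity scoring: fit a density model on embeddings of the data collected so far, then flag incoming samples that fall in low-density regions as unfamiliar. The Gaussian kernel density estimator, the scikit-learn API, and the percentile threshold below are illustrative assumptions, not necessarily the estimator or settings used in the paper.

```python
# Sketch: density-estimation-based "data familiarity" (illustrative, not the paper's exact method).
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_familiarity_model(train_embeddings: np.ndarray, bandwidth: float = 1.0) -> KernelDensity:
    """Fit a density estimator on embeddings of the already-collected dataset."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(train_embeddings)

def flag_unfamiliar(kde: KernelDensity, train_embeddings: np.ndarray,
                    new_embeddings: np.ndarray, percentile: float = 5.0) -> np.ndarray:
    """Return indices of new samples whose log-density falls below a low percentile
    of the training data's log-density, i.e., samples the model is unfamiliar with."""
    threshold = np.percentile(kde.score_samples(train_embeddings), percentile)
    return np.where(kde.score_samples(new_embeddings) < threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for model embeddings (in practice, e.g., penultimate-layer features).
    collected = rng.normal(0.0, 1.0, size=(500, 8))
    incoming = np.vstack([rng.normal(0.0, 1.0, size=(95, 8)),   # familiar samples
                          rng.normal(4.0, 1.0, size=(5, 8))])   # unfamiliar samples
    kde = fit_familiarity_model(collected)
    print("Flagged as unfamiliar:", flag_unfamiliar(kde, collected, incoming))
```

In practice, the flagged samples can be surfaced during Collection Monitoring to guide further data collection or to debug the dataset.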
Related papers
- Fitting Multiple Machine Learning Models with Performance Based Clustering [8.763425474439552]
Traditional machine learning approaches assume that data comes from a single generating mechanism, an assumption that may not hold for most real-life data.
We introduce a clustering framework that eliminates this assumption by grouping the data according to the relations between the features and the target values.
We extend our framework to applications having streaming data where we produce outcomes using an ensemble of models.
arXiv Detail & Related papers (2024-11-10T19:38:35Z)
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- A CLIP-Powered Framework for Robust and Generalizable Data Selection [51.46695086779598]
Real-world datasets often contain redundant and noisy data, which negatively affects training efficiency and model performance.
Data selection has shown promise in identifying the most representative samples from the entire dataset.
We propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection.
arXiv Detail & Related papers (2024-10-15T03:00:58Z)
- Metadata-based Data Exploration with Retrieval-Augmented Generation for Large Language Models [3.7685718201378746]
This research introduces a new architecture for data exploration which employs a form of Retrieval-Augmented Generation (RAG) to enhance metadata-based data discovery.
The proposed framework offers a new method for evaluating semantic similarity among heterogeneous data sources.
arXiv Detail & Related papers (2024-10-05T17:11:37Z)
- Retrieval-Augmented Data Augmentation for Low-Resource Domain Tasks [66.87070857705994]
In low-resource settings, the number of seed samples available for data augmentation is very small.
We propose a novel method that augments training data by incorporating a wealth of examples from other datasets.
This approach can ensure that the generated data is not only relevant but also more diverse than what could be achieved using the limited seed data alone.
arXiv Detail & Related papers (2024-02-21T02:45:46Z)
- How to Train Data-Efficient LLMs [56.41105687693619]
We study data-efficient approaches for pre-training large language models (LLMs).
In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density sampling are the best methods in their respective categories.
arXiv Detail & Related papers (2024-02-15T02:27:57Z)
- Optimizing Data Collection for Machine Learning [87.37252958806856]
Modern deep learning systems require huge data sets to achieve impressive performance.
Over-collecting data incurs unnecessary present costs, while under-collecting may incur future costs and delay.
We propose a new paradigm that models data collection as a formal optimal data collection problem.
arXiv Detail & Related papers (2022-10-03T21:19:05Z)
- Learning to Count in the Crowd from Limited Labeled Data [109.2954525909007]
We focus on reducing annotation effort by learning to count in the crowd from a limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that involves estimation of pseudo-ground truth for the unlabeled data.
arXiv Detail & Related papers (2020-07-07T04:17:01Z)
- Overcoming Noisy and Irrelevant Data in Federated Learning [13.963024590508038]
Federated learning is an effective way of training a machine learning model in a distributed manner from local data collected by client devices.
We propose a method for distributedly selecting relevant data, where we use a benchmark model trained on a small benchmark dataset.
The effectiveness of our proposed approach is evaluated on multiple real-world image datasets in a simulated system with a large number of clients.
arXiv Detail & Related papers (2020-01-22T22:28:47Z)
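The benchmark-model idea in the entry above can be sketched as follows: a small model trained centrally on a trusted benchmark dataset scores each client's local samples, and samples with unusually high loss are treated as noisy or irrelevant and excluded from local training. The logistic-regression benchmark model and the fixed loss threshold below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: benchmark-model-based data filtering for federated clients (illustrative assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_benchmark_model(X_bench: np.ndarray, y_bench: np.ndarray) -> LogisticRegression:
    """Train a small benchmark model on the trusted, centrally held benchmark dataset."""
    return LogisticRegression(max_iter=1000).fit(X_bench, y_bench)

def select_relevant(model: LogisticRegression, X_local: np.ndarray, y_local: np.ndarray,
                    max_loss: float = 1.0) -> np.ndarray:
    """Return indices of local samples whose per-sample log-loss under the benchmark
    model is at most max_loss; each client would run this before local training."""
    proba = model.predict_proba(X_local)
    class_idx = np.searchsorted(model.classes_, y_local)  # map labels to probability columns
    nll = -np.log(np.clip(proba[np.arange(len(y_local)), class_idx], 1e-12, None))
    return np.where(nll <= max_loss)[0]
```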
This list is automatically generated from the titles and abstracts of the papers on this site.