Lost in the Pipeline: How Well Do Large Language Models Handle Data Preparation?
- URL: http://arxiv.org/abs/2511.21708v1
- Date: Mon, 17 Nov 2025 19:06:22 GMT
- Title: Lost in the Pipeline: How Well Do Large Language Models Handle Data Preparation?
- Authors: Matteo Spreafico, Ludovica Tassini, Camilla Sancricca, Cinzia Cappiello,
- Abstract summary: This paper investigates whether large language models can effectively support users in selecting and automating data preparation tasks. We prompted these models with poor-quality datasets and measured their ability to perform tasks such as data profiling and cleaning. To evaluate the capabilities of large language models, we developed a custom-designed quality model that has been validated through a user study.
- Score: 0.7291396653006809
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Large language models have recently demonstrated exceptional capabilities in supporting and automating various tasks. Among the tasks worth exploring for testing large language model capabilities, we considered data preparation, a critical yet often labor-intensive step in data-driven processes. This paper investigates whether large language models can effectively support users in selecting and automating data preparation tasks. To this end, we considered both general-purpose and fine-tuned tabular large language models. We prompted these models with poor-quality datasets and measured their ability to perform tasks such as data profiling and cleaning. We also compared the support provided by large language models with that offered by traditional data preparation tools. To evaluate the capabilities of large language models, we developed a custom-designed quality model, validated through a user study, to gain insights into practitioners' expectations.
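The profiling and cleaning tasks the abstract refers to can be illustrated with a small pandas sketch. This is not the paper's methodology, only a hedged example of the kind of quality issues (missing values, duplicates, inconsistent casing) a model or tool would be asked to detect and repair; the toy table and all column names are invented for illustration.

```python
import pandas as pd
import numpy as np

# Toy "poor-quality" table: a missing row, an exact duplicate, and
# inconsistent casing -- typical issues targeted by data preparation.
df = pd.DataFrame({
    "city": ["Milan", "milan", "Rome", None, "Rome"],
    "population": [1_400_000, 1_400_000, 2_800_000, np.nan, 2_800_000],
})

# --- Profiling: quantify the quality issues ---
profile = {
    "rows": len(df),
    "missing_per_column": df.isna().sum().to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
}

# --- Cleaning: normalize casing, then drop duplicates and incomplete rows ---
cleaned = (
    df.assign(city=df["city"].str.title())
      .drop_duplicates()
      .dropna()
      .reset_index(drop=True)
)
print(profile)
print(cleaned)
```

A traditional tool would surface the `profile` dictionary to the user and let them choose repairs; the paper's question is whether an LLM, prompted with the raw table, can perform both steps reliably.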
Related papers
- Large Language Models in Software Documentation and Modeling: A Literature Review and Findings [0.0]
We conduct a literature review on the usage of large language models for software engineering tasks related to documentation and modeling. We analyze articles from four major venues in the area, organize them per tasks they solve, and provide an overview of used prompt techniques, metrics, approaches to human-based evaluation, and major datasets.
arXiv Detail & Related papers (2026-02-04T16:21:50Z) - Evolution without Large Models: Training Language Model with Task Principles [52.44569608690695]
A common training approach for language models involves using a large-scale language model to expand a human-provided dataset. This method significantly reduces training costs by eliminating the need for extensive human data annotation. However, it still faces challenges such as high carbon emissions during data augmentation and the risk of data leakage.
arXiv Detail & Related papers (2025-07-08T13:52:45Z) - Teaching a Language Model to Speak the Language of Tools [0.0]
This work presents a methodology for adapting existing language models to enable robust tool use in any target language. The research introduces TUCAN, which achieves up to 28.75% improvement in function-calling accuracy over base models.
arXiv Detail & Related papers (2025-06-29T20:47:27Z) - Large Language Models in the Task of Automatic Validation of Text Classifier Predictions [45.88028371034407]
Machine learning models for text classification are trained to predict a class for a given text. To do this, training and validation samples must be prepared, and each text is assigned a class. Classes are usually assigned by human annotators with different expertise levels, depending on the specific classification task. This paper proposes several approaches to replace human annotators with Large Language Models.
arXiv Detail & Related papers (2025-05-24T13:19:03Z) - Personalized Multimodal Large Language Models: A Survey [127.9521218125761]
Multimodal Large Language Models (MLLMs) have become increasingly important due to their state-of-the-art performance and ability to integrate multiple data modalities. This paper presents a comprehensive survey on personalized multimodal large language models, focusing on their architecture, training methods, and applications.
arXiv Detail & Related papers (2024-12-03T03:59:03Z) - Large corpora and large language models: a replicable method for automating grammatical annotation [0.0]
We introduce a methodological pipeline applied to the case study of formal variation in the English evaluative verb construction 'consider X (as) (to be) Y'. We reach a model accuracy of over 90% on our held-out test samples with only a small amount of training data. We discuss the generalisability of our results for a wider range of case studies of grammatical constructions and grammatical variation and change.
arXiv Detail & Related papers (2024-11-18T03:29:48Z) - InkubaLM: A small language model for low-resource African languages [9.426968756845389]
InkubaLM is a small language model with 0.4 billion parameters.
It achieves performance comparable to models with significantly larger parameter counts.
It demonstrates remarkable consistency across multiple languages.
arXiv Detail & Related papers (2024-08-30T05:42:31Z) - Language Modeling on Tabular Data: A Survey of Foundations, Techniques and Evolution [7.681258910515419]
Tabular data presents unique challenges due to its heterogeneous nature and complex structural relationships.
High predictive performance and robustness in tabular data analysis holds significant promise for numerous applications.
The recent advent of large language models, such as GPT and LLaMA, has further revolutionized the field, facilitating more advanced and diverse applications with minimal fine-tuning.
arXiv Detail & Related papers (2024-08-20T04:59:19Z) - Software Model Evolution with Large Language Models: Experiments on Simulated, Public, and Industrial Datasets [6.585732390922304]
We evaluate the potential of large language models for model completion with RAMC. We found that large language models are indeed a promising technology for supporting software model evolution. The general inference capabilities of large language models are particularly useful when dealing with concepts for which there are few, noisy, or no examples at all.
arXiv Detail & Related papers (2024-06-25T15:43:20Z) - Evaluating Large Language Models on Controlled Generation Tasks [92.64781370921486]
We present an extensive analysis of various benchmarks including a sentence planning benchmark with different granularities.
After comparing large language models against state-of-the-art finetuned smaller models, we present a spectrum showing that large language models fall behind, are comparable to, or exceed the ability of smaller models.
arXiv Detail & Related papers (2023-10-23T03:48:24Z) - Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called template-content structure (T-C structure) can reduce the possible space from exponential level to linear level.
We demonstrate that models can achieve task composition, further reducing the space needed to learn from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z) - When Large Language Models Meet Personalization: Perspectives of Challenges and Opportunities [60.5609416496429]
The capability of large language models has been dramatically improved.
Such a major leap-forward in general AI capacity will change the pattern of how personalization is conducted.
By leveraging large language models as a general-purpose interface, personalization systems may compile user requests into plans.
arXiv Detail & Related papers (2023-07-31T02:48:56Z) - PaLM: Scaling Language Modeling with Pathways [180.69584031908113]
We trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM.
We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods.
We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks.
arXiv Detail & Related papers (2022-04-05T16:11:45Z) - Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low resource languages is a challenging task because the patterns are hard to predict.
This work shows a comparison of a neural model and character language models with varying amounts of target language data.
Our usage scenario is interactive correction with nearly zero amounts of training examples, improving models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.