Exploring LLM Capabilities in Extracting DCAT-Compatible Metadata for Data Cataloging
- URL: http://arxiv.org/abs/2507.05282v1
- Date: Fri, 04 Jul 2025 10:49:37 GMT
- Title: Exploring LLM Capabilities in Extracting DCAT-Compatible Metadata for Data Cataloging
- Authors: Lennart Busch, Daniel Tebernum, Gissel Velarde
- Abstract summary: Data catalogs can support and accelerate data exploration by using metadata to answer user queries. This study investigates whether LLMs can automate metadata maintenance of text-based data and generate high-quality DCAT-compatible metadata. Our results show that LLMs can generate metadata comparable to human-created content, particularly on tasks that require advanced semantic understanding.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient data exploration is crucial as data becomes increasingly important for accelerating processes, improving forecasts, and developing new business models. Data consumers often spend 25-98% of their time searching for suitable data due to the exponential growth, heterogeneity, and distribution of data. Data catalogs can support and accelerate data exploration by using metadata to answer user queries. However, because metadata creation and maintenance are typically manual processes, they are time-consuming and require expertise. This study investigates whether LLMs can automate metadata maintenance of text-based data and generate high-quality DCAT-compatible metadata. We tested zero-shot and few-shot prompting strategies with LLMs from different vendors for generating metadata such as titles and keywords, along with a fine-tuned model for classification. Our results show that LLMs can generate metadata comparable to human-created content, particularly on tasks that require advanced semantic understanding. Larger models outperformed smaller ones, fine-tuning significantly improved classification accuracy, and few-shot prompting yielded better results in most cases. Although LLMs offer a faster and more reliable way to create metadata, successful application requires careful consideration of task-specific criteria and domain context.
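To make the prompting setup concrete, here is a minimal sketch of few-shot metadata extraction mapped onto DCAT properties (`dct:title`, `dcat:keyword`). It is not the authors' code: the example documents, the `call_llm` stub, and the exact prompt wording are assumptions standing in for a real chat-completion client and the paper's actual prompts.

```python
import json

# Hypothetical few-shot example pairing a document excerpt with the
# metadata fields we want back; the paper's real prompts and corpora
# are not reproduced here.
FEW_SHOT_EXAMPLES = [
    {
        "document": "Quarterly air-quality measurements for Berlin, 2019-2023 ...",
        "metadata": {
            "title": "Berlin Air Quality Measurements, 2019-2023",
            "keywords": ["air quality", "Berlin", "time series"],
        },
    },
]


def build_prompt(document: str) -> str:
    """Assemble a few-shot prompt that asks for metadata as JSON.

    With FEW_SHOT_EXAMPLES left empty, this degrades to the zero-shot
    variant the abstract also mentions.
    """
    parts = [
        "Extract a concise title and 3-5 keywords from the document. "
        'Answer with JSON like {"title": ..., "keywords": [...]}.',
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Document: {ex['document']}")
        parts.append(f"Metadata: {json.dumps(ex['metadata'])}")
        parts.append("")
    parts.append(f"Document: {document}")
    parts.append("Metadata:")
    return "\n".join(parts)


def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API (hosted or local model);
    returns a canned answer so the sketch runs end to end."""
    return ('{"title": "Hourly River Discharge Readings", '
            '"keywords": ["hydrology", "river discharge", "sensor data"]}')


def to_dcat(raw_answer: str) -> dict:
    """Map the model's JSON answer onto DCAT-compatible properties."""
    fields = json.loads(raw_answer)
    return {
        "@type": "dcat:Dataset",
        "dct:title": fields["title"],
        "dcat:keyword": fields["keywords"],
    }


if __name__ == "__main__":
    prompt = build_prompt("Hourly river discharge readings for the Rhine ...")
    print(json.dumps(to_dcat(call_llm(prompt)), indent=2))
```

Swapping `call_llm` for a real model client and emptying `FEW_SHOT_EXAMPLES` reproduces the zero-shot condition, one of the settings the study compares.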
Related papers
- A Survey of LLM $\times$ DATA
The integration of large language models (LLMs) and data management (DATA4LLM) is rapidly redefining both domains. On the one hand, DATA4LLM feeds LLMs with the high-quality, diverse, and timely data required for stages like pre-training, post-training, retrieval-augmented generation, and agentic workflows. On the other hand, LLMs are emerging as general-purpose engines for data management.
arXiv Detail & Related papers (2025-05-24T01:57:12Z)
- LLMs as Data Annotators: How Close Are We to Human Performance
Manual annotation of data is labor-intensive, time-consuming, and costly. In-context learning (ICL), in which some examples related to the task are given in the prompt, can lead to inefficiencies and suboptimal model performance. This paper presents experiments comparing several LLMs, considering different embedding models, across various datasets for the Named Entity Recognition (NER) task.
arXiv Detail & Related papers (2025-04-21T11:11:07Z)
- Augmented Relevance Datasets with Fine-Tuned Small LLMs
This paper explores the use of small, fine-tuned large language models (LLMs) to automate relevance assessment. We fine-tuned small LLMs to enhance relevance assessments, thereby improving dataset creation quality for downstream ranking model training.
arXiv Detail & Related papers (2025-04-14T02:35:00Z)
- Leveraging Retrieval Augmented Generative LLMs For Automated Metadata Description Generation to Enhance Data Catalogs
Data catalogs serve as repositories for organizing and accessing diverse collections of data assets. Many data catalogs within organizations suffer from limited searchability due to inadequate metadata such as asset descriptions. This paper explores the challenges associated with metadata creation and proposes a prompt-enrichment approach that leverages existing metadata content.
arXiv Detail & Related papers (2025-03-12T02:33:33Z)
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Utilising a Large Language Model to Annotate Subject Metadata: A Case Study in an Australian National Research Data Catalogue
In support of open and reproducible research, there has been a rapidly increasing number of datasets made available for research.
As the availability of datasets increases, it becomes more important to have quality metadata for discovering and reusing them.
This paper proposes to leverage large language models (LLMs) for cost-effective annotation of subject metadata through LLM-based in-context learning.
arXiv Detail & Related papers (2023-10-17T14:52:33Z)
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM
We propose a novel closed-loop system that bridges data generation, model training, and evaluation.
Within each loop, the MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results.
For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data.
For quality, we resort to GPT-4 to generate high-quality data with each given data type.
arXiv Detail & Related papers (2023-08-25T01:41:04Z)
- Data Race Detection Using Large Language Models
Large language models (LLMs) offer an alternative strategy for facilitating analyses and optimizations of high-performance computing programs.
In this paper, we explore a novel LLM-based data race detection approach combining prompt engineering and fine-tuning techniques.
arXiv Detail & Related papers (2023-08-15T00:08:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.