Automatic Bottom-Up Taxonomy Construction: A Software Application Domain Study
- URL: http://arxiv.org/abs/2409.15881v1
- Date: Tue, 24 Sep 2024 08:55:07 GMT
- Title: Automatic Bottom-Up Taxonomy Construction: A Software Application Domain Study
- Authors: Cezar Sas, Andrea Capiluppi
- Abstract summary: Previous research in software application domain classification has faced challenges due to the lack of a proper taxonomy.
This study aims to develop a comprehensive software application domain taxonomy by integrating multiple datasources and leveraging ensemble methods.
- Score: 6.0158981171030685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous research in software application domain classification has faced challenges due to the lack of a proper taxonomy that explicitly models relations between classes. As a result, current solutions are less effective for real-world usage. This study aims to develop a comprehensive software application domain taxonomy by integrating multiple datasources and leveraging ensemble methods. The goal is to overcome the limitations of individual sources and configurations by creating a more robust, accurate, and reproducible taxonomy. This study employs a quantitative research design involving three different datasources: an existing Computer Science Ontology (CSO), Wikidata, and LLMs. The study utilises a combination of automated and human evaluations to assess the quality of the resulting taxonomy. The outcome measures include the number of unlinked terms, the number of self-loops, and the overall connectivity of the taxonomy. The results indicate that individual datasources have advantages and drawbacks: the CSO datasource showed minimal variance across different configurations, but suffered from missing technical terms and a high number of self-loops; the Wikidata datasource required significant filtering during construction to improve metric performance; and LLM-generated taxonomies performed better when using context-rich prompts. An ensemble approach showed the most promise, successfully reducing the number of unlinked terms and self-loops and thus creating a more connected and comprehensive taxonomy. The study addresses the construction of a software application domain taxonomy relying on pre-existing resources, and our results indicate that an ensemble approach to taxonomy construction can effectively address the limitations of individual datasources. Future work should focus on refining the ensemble techniques and exploring additional datasources to enhance the taxonomy's accuracy and completeness.
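The outcome measures above have a direct graph reading. The snippet below is a minimal sketch, assuming each datasource is reduced to a set of (term, broader-term) edges over a shared term list; the function names and the union-based ensemble step are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: taxonomy quality metrics and a naive ensemble merge.
# Assumption: each source is a list of (term, broader_term) edges; this is
# not the paper's implementation, only an illustration of the reported metrics.
import networkx as nx

def build_taxonomy(edges, terms):
    """Directed graph with an edge term -> broader_term for each relation."""
    g = nx.DiGraph()
    g.add_nodes_from(terms)          # include terms even if no relation was found
    g.add_edges_from(edges)
    return g

def taxonomy_metrics(g):
    self_loops = list(nx.selfloop_edges(g))              # term listed as its own parent
    unlinked = [n for n in g.nodes if g.degree(n) == 0]  # terms with no relation at all
    return {
        "self_loops": len(self_loops),
        "unlinked_terms": len(unlinked),
        "weak_components": nx.number_weakly_connected_components(g),  # 1 = fully connected
    }

def ensemble(edge_sets, terms):
    """Naive ensemble: union of edges from all sources, self-loops dropped."""
    merged = {(a, b) for edges in edge_sets for (a, b) in edges if a != b}
    return build_taxonomy(merged, terms)

if __name__ == "__main__":
    terms = ["web framework", "software", "compiler", "game engine"]
    cso_like = [("web framework", "software"), ("compiler", "compiler")]   # has a self-loop
    llm_like = [("compiler", "software"), ("game engine", "software")]
    for name, edges in [("cso", cso_like), ("llm", llm_like)]:
        print(name, taxonomy_metrics(build_taxonomy(edges, terms)))
    print("ensemble", taxonomy_metrics(ensemble([cso_like, llm_like], terms)))
```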
Related papers
- Refining Wikidata Taxonomy using Large Language Models [2.392329079182226]
We present WiKC, a new version of the Wikidata taxonomy, cleaned automatically using a combination of Large Language Models (LLMs) and graph mining techniques.
Operations on the taxonomy, such as cutting links or merging classes, are performed with the help of zero-shot prompting on an open-source LLM (a sketch of this kind of prompt appears after this list).
arXiv Detail & Related papers (2024-09-06T06:53:45Z) - FLAME: Self-Supervised Low-Resource Taxonomy Expansion using Large
Language Models [19.863010475923414]
Taxonomies find utility in various real-world applications, such as e-commerce search engines and recommendation systems.
Traditional supervised taxonomy expansion approaches encounter difficulties stemming from limited resources.
We propose FLAME, a novel approach for taxonomy expansion in low-resource environments by harnessing the capabilities of large language models.
arXiv Detail & Related papers (2024-02-21T08:50:40Z) - Query of CC: Unearthing Large Scale Domain-Specific Knowledge from
Public Corpora [104.16648246740543]
We propose an efficient data collection method based on large language models.
The method bootstraps seed information through a large language model and retrieves related data from public corpora.
It not only collects knowledge-related data for specific domains but also unearths data with potential reasoning procedures.
arXiv Detail & Related papers (2024-01-26T03:38:23Z) - Data Optimization in Deep Learning: A Survey [3.1274367448459253]
This study aims to organize a wide range of existing data optimization methodologies for deep learning.
The constructed taxonomy considers the diversity of split dimensions, and deep sub-taxonomies are constructed for each dimension.
The constructed taxonomy and the revealed connections will foster a better understanding of existing methods and inform the design of novel data optimization techniques.
arXiv Detail & Related papers (2023-10-25T09:33:57Z) - Prompting or Fine-tuning? A Comparative Study of Large Language Models
for Taxonomy Construction [0.8670827427401335]
We present a general framework for taxonomy construction that takes into account structural constraints.
We compare the prompting and fine-tuning approaches performed on a hypernym taxonomy and a novel computer science taxonomy dataset.
arXiv Detail & Related papers (2023-09-04T16:53:17Z) - TaxoEnrich: Self-Supervised Taxonomy Completion via Structure-Semantic
Representations [28.65753036636082]
We propose a new taxonomy completion framework, which effectively leverages both semantic features and structural information in the existing taxonomy.
Among its four components, TaxoEnrich includes: (1) a taxonomy-contextualized embedding which incorporates both the semantic meaning of concepts and their taxonomic relations based on powerful pretrained language models; and (2) a taxonomy-aware sequential encoder which learns candidate position representations by encoding the structural information of the taxonomy.
Experiments on four large real-world datasets from different domains show that TaxoEnrich achieves the best performance among all evaluation metrics and outperforms previous state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-02-10T08:10:43Z) - Taxonomy Enrichment with Text and Graph Vector Representations [61.814256012166794]
We address the problem of taxonomy enrichment, which aims at adding new words to an existing taxonomy.
We present a new method that achieves strong results on this task with little effort.
We achieve state-of-the-art results across different datasets and provide an in-depth error analysis of mistakes.
arXiv Detail & Related papers (2022-01-21T09:01:12Z) - Studying Taxonomy Enrichment on Diachronic WordNet Versions [70.27072729280528]
We explore the possibilities of taxonomy extension in a resource-poor setting and present methods which are applicable to a large number of languages.
We create novel English and Russian datasets for training and evaluating taxonomy enrichment models and describe a technique for creating such datasets for other languages.
arXiv Detail & Related papers (2020-11-23T16:49:37Z) - Octet: Online Catalog Taxonomy Enrichment with Self-Supervision [67.26804972901952]
We present Octet, a self-supervised end-to-end framework for Online Catalog EnrichmenT.
We propose to train a sequence labeling model for term extraction and employ graph neural networks (GNNs) to capture the taxonomy structure.
In the open-world evaluation, Octet enriches an online catalog in production to twice its original size.
arXiv Detail & Related papers (2020-06-18T04:53:07Z) - STEAM: Self-Supervised Taxonomy Expansion with Mini-Paths [53.45704816829921]
We propose a self-supervised taxonomy expansion model named STEAM.
STEAM generates natural self-supervision signals, and formulates a node attachment prediction task.
Experiments show STEAM outperforms state-of-the-art methods for taxonomy expansion by 11.6% in accuracy and 7.0% in mean reciprocal rank.
arXiv Detail & Related papers (2020-06-18T00:32:53Z) - TaxoExpan: Self-supervised Taxonomy Expansion with Position-Enhanced
Graph Neural Network [62.12557274257303]
Taxonomies consist of machine-interpretable semantics and provide valuable knowledge for many web applications.
We propose a novel self-supervised framework, named TaxoExpan, which automatically generates a set of ⟨query concept, anchor concept⟩ pairs from the existing taxonomy as training data.
We develop two innovative techniques in TaxoExpan: (1) a position-enhanced graph neural network that encodes the local structure of an anchor concept in the existing taxonomy, and (2) a noise-robust training objective that enables the learned model to be insensitive to the label noise in the self-supervision data.
arXiv Detail & Related papers (2020-01-26T21:30:21Z)