LLMs4OL: Large Language Models for Ontology Learning
- URL: http://arxiv.org/abs/2307.16648v2
- Date: Wed, 2 Aug 2023 07:47:26 GMT
- Title: LLMs4OL: Large Language Models for Ontology Learning
- Authors: Hamed Babaei Giglou and Jennifer D'Souza and Sören Auer
- Abstract summary: We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL).
LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains.
The evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs)
for Ontology Learning (OL). LLMs have shown significant advancements in natural
language processing, demonstrating their ability to capture complex language
patterns in different knowledge domains. Our LLMs4OL paradigm investigates the
following hypothesis: "Can LLMs effectively apply their language pattern
capturing capability to OL, which involves automatically extracting and
structuring knowledge from natural language text?" To test this hypothesis, we
conduct a comprehensive evaluation using the zero-shot prompting method. We
evaluate nine different LLM model families for three main OL tasks: term
typing, taxonomy discovery, and extraction of non-taxonomic relations.
Additionally, the evaluations encompass diverse genres of ontological
knowledge, including lexicosemantic knowledge in WordNet, geographical
knowledge in GeoNames, and medical knowledge in UMLS.
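For concreteness, the three OL tasks can each be cast as a zero-shot prompt along the following lines. This is a minimal sketch: the paper's actual prompt templates are not reproduced here, and `ask_llm` is a hypothetical stand-in for whichever LLM endpoint is being evaluated.

```python
# Minimal sketch of the three OL tasks as zero-shot prompts. The
# templates below are illustrative, not the paper's own, and
# ask_llm is a hypothetical stand-in for any completion endpoint.

def ask_llm(prompt: str) -> str:
    """Placeholder: route this to whichever LLM is under test."""
    raise NotImplementedError

def term_typing(term: str) -> str:
    # Task A: predict a type for a term (e.g., a GeoNames feature class).
    return ask_llm(f"What is the type of the term '{term}'? "
                   "Answer with a single type name.")

def taxonomy_discovery(type_a: str, type_b: str) -> str:
    # Task B: test whether an is-a relation holds between two types.
    return ask_llm(f"Is every {type_a} a kind of {type_b}? Answer yes or no.")

def relation_extraction(head: str, relation: str, tail: str) -> str:
    # Task C: test a candidate non-taxonomic relation between two terms.
    return ask_llm(f"Does the relation '{relation}' hold between '{head}' "
                   f"and '{tail}'? Answer yes or no.")
```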
Related papers
- Automating Knowledge Discovery from Scientific Literature via LLMs: A Dual-Agent Approach with Progressive Ontology Prompting [59.97247234955861]
We introduce a novel framework based on large language models (LLMs) that combines a progressive prompting algorithm with a dual-agent system named LLM-Duo.
Our method identifies 2,421 interventions from 64,177 research articles in the speech-language therapy domain.
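A generic annotator/verifier loop in this spirit might look as follows; the paper's actual progressive-prompting schedule and agent prompts are assumptions here, and `ask_llm` is a hypothetical completion function.

```python
# Hedged sketch of a dual-agent extraction loop: one agent proposes
# candidates, a second agent verifies them against the source text.
# ask_llm is a hypothetical stand-in for any LLM endpoint.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder

def extract_interventions(article: str) -> list[str]:
    # Annotator agent proposes candidate interventions from the text.
    proposal = ask_llm("List the therapy interventions mentioned in this "
                       f"article:\n{article}")
    candidates = [line.strip("- ").strip() for line in proposal.splitlines()
                  if line.strip()]
    # Verifier agent double-checks each candidate against the source.
    verified = []
    for cand in candidates:
        verdict = ask_llm(f"Article:\n{article}\n\nDoes this article really "
                          f"describe '{cand}' as an intervention? "
                          "Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            verified.append(cand)
    return verified
```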
arXiv Detail & Related papers (2024-08-20T16:42:23Z)
- Adapting Multilingual LLMs to Low-Resource Languages with Knowledge Graphs via Adapters [3.7273829129985305]
This paper explores the integration of graph knowledge from linguistic ontologies into multilingual Large Language Models (LLMs).
We employ language-specific adapters to improve performance for low-resource languages (LRLs) in sentiment analysis (SA) and named entity recognition (NER).
We assess how structured graph knowledge affects the performance of multilingual LLMs for LRLs in SA and NER.
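As an illustration, one way to realize per-language adapters on a shared multilingual backbone is LoRA via the `peft` library; the paper's actual adapter architecture and training data may differ, and the model name and label count below are placeholders.

```python
# LoRA-based stand-in for language-specific adapters: one lightweight
# adapter per low-resource language on a frozen shared backbone.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

backbone = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)  # placeholder: 3-way sentiment

adapter_cfg = LoraConfig(r=8, lora_alpha=16,
                         target_modules=["query", "value"],
                         task_type="SEQ_CLS")
model = get_peft_model(backbone, adapter_cfg)
model.print_trainable_parameters()
# Train this adapter on graph-derived text for one LRL, then repeat
# with a fresh adapter for each additional language.
```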
arXiv Detail & Related papers (2024-07-01T15:56:24Z)
- FAC²E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC²E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models [117.20416338476856]
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.
We propose a novel detection method, language activation probability entropy (LAPE), to identify language-specific neurons within LLMs.
Our findings indicate that LLMs' proficiency in processing a particular language is predominantly due to a small subset of neurons.
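The core quantity is easy to state: per neuron, take its activation probability under each language, normalize, and compute the entropy; low entropy marks a language-specific neuron. A sketch under assumed inputs and an assumed threshold:

```python
# Sketch of the LAPE idea: neurons whose activation probability is
# concentrated on one language have low entropy and are flagged as
# language-specific. Threshold and details follow the paper loosely.
import numpy as np

def lape(act_prob: np.ndarray) -> np.ndarray:
    """act_prob: (num_neurons, num_languages) activation probabilities."""
    p = act_prob / act_prob.sum(axis=1, keepdims=True)  # normalize per neuron
    return -(p * np.log(p + 1e-12)).sum(axis=1)         # entropy per neuron

# Toy example: neuron 0 fires almost only for language 0, so its
# entropy is low and it is flagged; neuron 1 fires for all languages.
probs = np.array([[0.90, 0.02, 0.03],
                  [0.30, 0.31, 0.29]])
scores = lape(probs)
language_specific = scores < np.quantile(scores, 0.05)  # low-entropy tail
print(language_specific)  # [ True False]
```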
arXiv Detail & Related papers (2024-02-26T09:36:05Z)
- How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
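A common way to realize this is few-shot prompting from question to formal query; the demonstration, schema identifiers, and `ask_llm` below are all illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of LLM-based logical-form generation for KBQA:
# few-shot prompt the model to translate a question into SPARQL,
# then execute it against the knowledge base. ask_llm is hypothetical.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder

FEW_SHOT = """Q: Who directed Alien?
SPARQL: SELECT ?d WHERE { :Alien :director ?d . }

"""

def question_to_sparql(question: str) -> str:
    # The model imitates the demonstrated question -> query mapping.
    prompt = FEW_SHOT + f"Q: {question}\nSPARQL:"
    return ask_llm(prompt).strip()
```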
arXiv Detail & Related papers (2024-01-11T09:27:50Z)
- Automating Knowledge Acquisition for Content-Centric Cognitive Agents Using LLMs [0.0]
The paper describes a system that uses large language model (LLM) technology to support the automatic learning of new entries in an intelligent agent's semantic lexicon.
The process is bootstrapped by an existing non-toy lexicon and a natural language generator that converts formal, ontologically-grounded representations of meaning into natural language sentences.
arXiv Detail & Related papers (2023-12-27T02:31:51Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs: it 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Establishing Vocabulary Tests as a Benchmark for Evaluating Large Language Models [2.7013338932521416]
We advocate for the revival of vocabulary tests as a valuable tool for assessing the performance of Large Language Models (LLMs).
We evaluate seven LLMs using two vocabulary test formats across two languages and uncover surprising gaps in their lexical knowledge.
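One possible format is a synonym-matching multiple-choice item; the paper's two actual test formats and items are not reproduced here, and `ask_llm` is a hypothetical completion function.

```python
# Illustrative multiple-choice vocabulary item: ask the model which
# option is closest in meaning to a target word and score the reply.
# ask_llm is a hypothetical stand-in for any LLM endpoint.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder

def score_item(word: str, options: list[str], gold: str) -> bool:
    letters = "ABCD"
    menu = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    reply = ask_llm(f"Which option is closest in meaning to '{word}'?\n"
                    f"{menu}\nAnswer with a single letter.")
    # Compare the model's letter with the letter of the gold option.
    return reply.strip().upper()[:1] == letters[options.index(gold)]
```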
arXiv Detail & Related papers (2023-10-23T08:45:12Z)
- The Quo Vadis of the Relationship between Language and Large Language Models [3.10770247120758]
The impressive performance of Large Language Models (LLMs) has encouraged their adoption as scientific models of language.
We identify the most important theoretical and empirical risks brought about by the adoption of scientific models that lack transparency.
We conclude that, at their current stage of development, LLMs hardly offer any explanations for language.
arXiv Detail & Related papers (2023-10-17T10:54:24Z)
- Shortcut Learning of Large Language Models in Natural Language Understanding [119.45683008451698]
Large language models (LLMs) have achieved state-of-the-art performance on a series of natural language understanding tasks.
However, they might rely on dataset biases and artifacts as shortcuts for prediction, which significantly affects their generalizability and adversarial robustness.
arXiv Detail & Related papers (2022-08-25T03:51:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.