Domain specific ontologies from Linked Open Data (LOD)
- URL: http://arxiv.org/abs/2505.22550v1
- Date: Wed, 28 May 2025 16:33:01 GMT
- Title: Domain specific ontologies from Linked Open Data (LOD)
- Authors: Rosario Uceda-Sosa, Nandana Mihindukulasooriya, Atul Kumar, Sahil Bansal, Seema Nagar
- Abstract summary: Domain-specific knowledge graphs make it more efficient to consume the knowledge and easier to extend with proprietary content. We discuss our experience bootstrapping such an ontology for IT with a domain-agnostic pipeline, and extending it using domain-specific glossaries.
- Score: 6.664338208823287
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Logical and probabilistic reasoning tasks that require a deeper knowledge of semantics are increasingly relying on general purpose ontologies such as Wikidata and DBpedia. However, tasks such as entity disambiguation and linking may benefit from domain specific knowledge graphs, which make it more efficient to consume the knowledge and easier to extend with proprietary content. We discuss our experience bootstrapping one such ontology for IT with a domain-agnostic pipeline, and extending it using domain-specific glossaries.
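The bootstrapping described in the abstract can be illustrated with a minimal sketch: glossary terms are attached to domain-agnostic upper classes to seed a small ontology. The glossary entries, categories, and class names below are all invented for illustration; the paper's actual pipeline draws on LOD sources such as Wikidata and DBpedia rather than a hard-coded table.

```python
# Toy sketch: bootstrap a minimal domain ontology from a glossary.
# All terms and upper classes here are hypothetical stand-ins for
# what the paper derives from Linked Open Data.

from collections import defaultdict

# Hypothetical IT glossary: term -> coarse category.
GLOSSARY = {
    "Kubernetes": "software",
    "TCP": "protocol",
    "Linux": "software",
    "HTTP": "protocol",
}

# Domain-agnostic upper classes the categories map onto.
UPPER_CLASSES = {"software": "SoftwareSystem", "protocol": "NetworkProtocol"}

def bootstrap_ontology(glossary):
    """Group glossary terms under upper classes (subClassOf-style links)."""
    ontology = defaultdict(list)
    for term, category in glossary.items():
        parent = UPPER_CLASSES.get(category, "Thing")  # fallback root class
        ontology[parent].append(term)
    return {cls: sorted(terms) for cls, terms in ontology.items()}

print(bootstrap_ontology(GLOSSARY))
```

In a real pipeline, the category lookup would be replaced by entity linking against LOD classes, and proprietary glossary content could extend the result without touching the upper ontology.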
Related papers
- STRUCTSENSE: A Task-Agnostic Agentic Framework for Structured Information Extraction with Human-In-The-Loop Evaluation and Benchmarking [2.355572228890207]
StructSense is a modular, task-agnostic, open-source framework for structured information extraction built on Large Language Models. It is guided by domain-specific symbolic knowledge, enabling it to encode complex domain content effectively. We demonstrate that StructSense can overcome both the limitations of domain sensitivity and the lack of cross-task generalizability.
arXiv Detail & Related papers (2025-07-04T15:51:07Z)
- Unsupervised Named Entity Disambiguation for Low Resource Domains [0.4297070083645049]
We present an unsupervised approach leveraging the concept of Group Steiner Trees (GST). GST can identify the most relevant candidates for entity disambiguation using the contextual similarities across candidate entities. We outperform the state-of-the-art unsupervised methods by more than 40% (on average) in terms of Precision@1 across various domain-specific datasets.
arXiv Detail & Related papers (2024-12-13T11:35:00Z)
- The Ontoverse: Democratising Access to Knowledge Graph-based Data Through a Cartographic Interface [33.861478826378054]
We have developed a unique approach to data navigation that leans on geographical visualisation and hierarchically structured domain knowledge.
Our approach uses natural language processing techniques to extract named entities from the underlying data and normalise them against relevant semantic domain references and navigational structures.
This allows end-users to identify entities relevant to their needs and access extensive graph analytics.
arXiv Detail & Related papers (2024-07-22T10:29:25Z)
- Unearthing Large Scale Domain-Specific Knowledge from Public Corpora [103.0865116794534]
We introduce large models into the data collection pipeline to guide the generation of domain-specific information. We refer to this approach as Retrieve-from-CC. It not only collects data related to domain-specific knowledge but also mines the data containing potential reasoning procedures from the public corpus.
arXiv Detail & Related papers (2024-01-26T03:38:23Z) - Domain Prompt Learning with Quaternion Networks [49.45309818782329]
We propose to leverage domain-specific knowledge from domain-specific foundation models to transfer the robust recognition ability of Vision-Language Models to specialized domains.
We present a hierarchical approach that generates vision prompt features by analyzing intermodal relationships between hierarchical language prompt features and domain-specific vision features.
Our proposed method achieves new state-of-the-art results in prompt learning.
arXiv Detail & Related papers (2023-12-12T08:49:39Z) - Knowledge Plugins: Enhancing Large Language Models for Domain-Specific
Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
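The three steps of this paradigm can be sketched as follows. The facts, the relevance score, and the prompt template below are hypothetical stand-ins chosen for illustration, not the DOKE implementation.

```python
# Hedged sketch of the three-step domain-knowledge-extractor paradigm:
# 1) prepare task-relevant knowledge, 2) select knowledge per sample,
# 3) express it as text an LLM can consume. All data is illustrative.

def prepare_knowledge():
    """Step 1: assemble effective domain facts (here, hard-coded)."""
    return [
        "Film A and Film B share the same director.",
        "Film B won an award in 2020.",
        "Film C is a documentary.",
    ]

def select_knowledge(sample, facts, k=2):
    """Step 2: pick the k facts most relevant to this sample
    (toy relevance = number of shared lowercase tokens)."""
    words = set(sample.lower().split())
    scored = sorted(facts, key=lambda f: -len(words & set(f.lower().split())))
    return scored[:k]

def express_knowledge(sample, facts):
    """Step 3: serialize the selected facts into a prompt fragment."""
    lines = "\n".join(f"- {f}" for f in facts)
    return f"Known facts:\n{lines}\n\nQuestion: {sample}"

facts = prepare_knowledge()
chosen = select_knowledge("Which film won an award?", facts)
print(express_knowledge("Which film won an award?", chosen))
```

A production system would replace the token-overlap scorer with a retriever or graph lookup, but the three-stage contract stays the same.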
arXiv Detail & Related papers (2023-11-16T07:09:38Z) - DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain
Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z) - DiscoverPath: A Knowledge Refinement and Retrieval System for
Interdisciplinarity on Biomedical Research [96.10765714077208]
Traditional keyword-based search engines fall short in assisting users who may not be familiar with specific terminologies.
We present a knowledge graph-based paper search engine for biomedical research to enhance the user experience.
The system, dubbed DiscoverPath, employs Named Entity Recognition (NER) and part-of-speech (POS) tagging to extract terminologies and relationships from article abstracts to create a KG.
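The extraction step can be sketched in miniature: a fixed gazetteer stands in for a trained NER model, and co-occurrence within one abstract stands in for an extracted relationship. The biomedical terms and example abstracts are invented; DiscoverPath itself uses NER and POS tagging rather than a term list.

```python
# Toy stand-in for the NER-to-KG step: gazetteer matching replaces a
# trained NER model, and co-occurrence in an abstract replaces POS-based
# relation extraction. All terms and abstracts are hypothetical.

from itertools import combinations

GAZETTEER = {"CRISPR", "p53", "apoptosis"}  # invented biomedical terms

def extract_entities(abstract):
    """Return gazetteer terms found in the abstract (NER stand-in)."""
    tokens = abstract.replace(",", " ").replace(".", " ").split()
    return [t for t in tokens if t in GAZETTEER]

def build_kg(abstracts):
    """Link entities that co-occur in the same abstract (KG edges)."""
    edges = set()
    for text in abstracts:
        ents = sorted(set(extract_entities(text)))
        edges.update(combinations(ents, 2))
    return edges

kg = build_kg(["CRISPR editing can restore p53 function.",
               "Loss of p53 suppresses apoptosis."])
print(sorted(kg))
```
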
arXiv Detail & Related papers (2023-09-04T20:52:33Z) - Reorganizing Educational Institutional Domain using Faceted Ontological
Principles [0.0]
This work investigates how different library classification systems and linguistic techniques organize a particular domain of interest.
We use knowledge representation techniques and languages to build a domain-specific ontology.
This construction not only helps in problem solving, but also demonstrates the ease with which complex queries can be handled.
arXiv Detail & Related papers (2021-04-18T19:28:10Z)
- Knowledge Graph Anchored Information-Extraction for Domain-Specific Insights [1.6308268213252761]
We use a task-based approach for fulfilling specific information needs within a new domain.
A pipeline constructed of state-of-the-art NLP technologies is used to automatically extract an instance-level semantic structure.
arXiv Detail & Related papers (2021-04-18T19:28:10Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study the novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.