DINGO: an ontology for projects and grants linked data
- URL: http://arxiv.org/abs/2006.13438v1
- Date: Wed, 24 Jun 2020 02:47:40 GMT
- Title: DINGO: an ontology for projects and grants linked data
- Authors: Diego Chialva, Alexis-Michel Mugabushaka
- Abstract summary: DINGO provides a framework to model data for semantically-enabled applications relative to projects, funding, actors, and, notably, funding policies in the research landscape.
We discuss its main features, the principles followed for its development, its community uptake, its maintenance and evolution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DINGO (Data INtegration for Grants Ontology), an ontology that
provides a machine readable extensible framework to model data for
semantically-enabled applications relative to projects, funding, actors, and,
notably, funding policies in the research landscape. DINGO is designed to yield
high modeling power and elasticity to cope with the huge variety in funding,
research and policy practices, which makes it applicable also to other areas
besides research where funding is an important aspect. We discuss its main
features, the principles followed for its development, its community uptake,
its maintenance and evolution.
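To make the kind of linked data DINGO targets concrete, here is a minimal sketch of publishing a few project/grant facts as RDF with Python's rdflib. The dingo: class and property names (Grant, Project, FundingAgency, funds, awardedBy, title) and the namespace URI are illustrative assumptions for this sketch, not terms checked against the published DINGO specification.

```python
# A minimal sketch (not the authors' code) of expressing project and grant
# facts as RDF triples with rdflib. All dingo: terms and the namespace URI
# below are placeholders assumed for illustration, not verified DINGO terms.
from rdflib import Graph, Literal, Namespace, RDF

DINGO = Namespace("https://example.org/dingo#")   # placeholder namespace
EX = Namespace("https://example.org/data/")

g = Graph()
g.bind("dingo", DINGO)
g.bind("ex", EX)

# A grant, the project it funds, and the agency that awarded it.
g.add((EX.grant42, RDF.type, DINGO.Grant))
g.add((EX.projectX, RDF.type, DINGO.Project))
g.add((EX.agencyY, RDF.type, DINGO.FundingAgency))
g.add((EX.grant42, DINGO.funds, EX.projectX))
g.add((EX.grant42, DINGO.awardedBy, EX.agencyY))
g.add((EX.projectX, DINGO.title, Literal("Open research metadata pilot")))

# Serialize to Turtle so the data can be consumed by other linked-data tools.
print(g.serialize(format="turtle"))
```

In a real application the vocabulary terms would be taken from the published DINGO ontology and the resources linked to existing identifiers for funders, grants and projects.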
Related papers
- A Survey on Post-training of Large Language Models [185.51013463503946]
Large Language Models (LLMs) have fundamentally transformed natural language processing, making them indispensable across domains ranging from conversational systems to scientific exploration.
Remaining shortcomings, such as restricted reasoning capacities, ethical uncertainties, and suboptimal domain-specific performance, necessitate advanced post-training language models (PoLMs).
This paper presents the first comprehensive survey of PoLMs, systematically tracing their evolution across five core paradigms.
arXiv Detail & Related papers (2025-03-08T05:41:42Z)
- Cracking the Code: Enhancing Development finance understanding with artificial intelligence [0.0]
This research employs a novel approach that combines Machine Learning (ML) techniques, specifically Natural Language Processing (NLP), with BERTopic, a Python topic modeling technique (a minimal usage sketch follows this entry).
By revealing existing yet hidden topics in development finance, this application of artificial intelligence enables a better understanding of donor priorities and overall development funding.
arXiv Detail & Related papers (2025-02-13T17:01:45Z)
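The entry above names BERTopic; as a hedged sketch of how such topic modeling is typically invoked (the example documents are invented and the paper's own pipeline is not reproduced here), one might write:

```python
# A minimal sketch of topic modeling with the BERTopic library.
# The documents are invented; a meaningful run needs a much larger
# corpus of development-finance descriptions.
from bertopic import BERTopic

docs = [
    "Grant for rural health clinics and maternal care in East Africa.",
    "Funding to train nurses and expand maternal health services.",
    "Support for vaccination campaigns in remote health districts.",
    "Loan to expand solar power capacity in coastal regions.",
    "Grant for wind energy feasibility studies and grid upgrades.",
    "Climate adaptation funding for renewable energy cooperatives.",
    "Scholarships supporting girls' secondary education in STEM.",
    "Teacher training programme for rural primary schools.",
    "Funding for digital learning materials and school libraries.",
    "Budget support for public financial management reform.",
    "Technical assistance for tax administration modernization.",
    "Grant to strengthen anti-corruption audit institutions.",
]

# Small corpus, so allow tiny topics; the defaults assume hundreds of documents.
topic_model = BERTopic(min_topic_size=2)
topics, probs = topic_model.fit_transform(docs)

# One row per discovered topic, including the outlier topic -1.
print(topic_model.get_topic_info())
```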
- Spatio-Temporal Foundation Models: Vision, Challenges, and Opportunities [48.45951497996322]
Spatio-temporal foundation models (STFMs) build on foundation models, which have revolutionized artificial intelligence by setting new benchmarks in performance and enabling transformative capabilities across a wide range of vision and language tasks.
In this paper, we articulate a vision for the future of STFMs, outlining their essential characteristics and generalization capabilities necessary for broad applicability.
We explore potential opportunities and directions to advance research towards the aim of effective and broadly applicable STFMs.
arXiv Detail & Related papers (2025-01-15T08:52:28Z)
- Foundation Models for Remote Sensing and Earth Observation: A Survey [101.77425018347557]
This survey systematically reviews the emerging field of Remote Sensing Foundation Models (RSFMs).
It begins with an outline of their motivation and background, followed by an introduction of their foundational concepts.
We benchmark these models against publicly available datasets, discuss existing challenges, and propose future research directions.
arXiv Detail & Related papers (2024-10-22T01:08:21Z)
- Affective Computing Has Changed: The Foundation Model Disruption [47.88090382507161]
We aim to raise awareness of the power of Foundation Models in the field of Affective Computing.
We synthetically generate and analyse multimodal affective data, focusing on vision, linguistics, and speech (acoustics).
We discuss some fundamental problems, such as ethical issues and regulatory aspects, related to the use of Foundation Models in this research area.
arXiv Detail & Related papers (2024-09-13T15:20:18Z)
- A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges [60.546677053091685]
Large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain.
We explore the application of LLMs on various financial tasks, focusing on their potential to transform traditional practices and drive innovation.
We categorize the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications.
arXiv Detail & Related papers (2024-06-15T16:11:35Z)
- Training and Serving System of Foundation Models: A Comprehensive Survey [32.0115390377174]
This paper extensively explores the methods employed in training and serving foundation models from various perspectives.
It provides a detailed categorization of these state-of-the-art methods, including finer aspects such as network, computing, and storage.
arXiv Detail & Related papers (2024-01-05T05:27:15Z)
- A Survey of Reasoning with Foundation Models [235.7288855108172]
Reasoning plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation.
We introduce seminal foundation models proposed or adaptable for reasoning.
We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models.
arXiv Detail & Related papers (2023-12-17T15:16:13Z)
- Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook [95.32949323258251]
Temporal data, notably time series and spatio-temporal data, are prevalent in real-world applications.
Recent advances in large language and other foundation models have spurred increased use in time series and spatio-temporal data mining.
arXiv Detail & Related papers (2023-10-16T09:06:00Z)
- Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey [100.24095818099522]
Large language models (LLMs) have significantly advanced the field of natural language processing (NLP).
They provide a highly useful, task-agnostic foundation for a wide range of applications.
However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles.
arXiv Detail & Related papers (2023-05-30T03:00:30Z)
- Find the Funding: Entity Linking with Incomplete Funding Knowledge Bases [1.9451328614697954]
Two major challenges in identifying and linking funding entities are (i) the sparse graph structure of the Knowledge Base (KB) and (ii) missing entities in the KB.
We propose an entity linking model that can perform NIL prediction and overcome data scarcity issues in a time and data-efficient manner.
arXiv Detail & Related papers (2022-09-01T10:41:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.