Ontology and Cognitive Outcomes
- URL: http://arxiv.org/abs/2005.08078v3
- Date: Fri, 8 Jan 2021 14:49:58 GMT
- Title: Ontology and Cognitive Outcomes
- Authors: David Limbaugh, Jobst Landgrebe, David Kasmier, Ronald Rudnicki, James
Llinas, Barry Smith
- Abstract summary: The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US.
The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors and their behaviors can be developed and updated.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Here we understand 'intelligence' as referring to items of knowledge
collected for the sake of assessing and maintaining national security. The
intelligence community (IC) of the United States (US) is a community of
organizations that collaborate in collecting and processing intelligence for
the US. The IC relies on human-machine-based analytic strategies that 1) access
and integrate vast amounts of information from disparate sources, 2)
continuously process this information, so that, 3) a maximally comprehensive
understanding of world actors and their behaviors can be developed and updated.
Herein we describe an approach to utilizing outcomes-based learning (OBL) to
support these efforts that is based on an ontology of the cognitive processes
performed by intelligence analysts. Of particular importance to the Cognitive
Process Ontology is the class Representation that is Warranted. Such a
representation is descriptive in nature and deserves trust in its veridicality,
because a Representation that is Warranted is always produced by a process that
was vetted (or successfully designed) to reliably produce veridical
representations. As such, Representations that are Warranted are what in other
contexts we might refer to as 'items of knowledge'.
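The warrant condition described in the abstract can be sketched in code. This is a hypothetical illustration only, not part of the paper's formal Cognitive Process Ontology; all class and attribute names here are invented for the example:

```python
from dataclasses import dataclass

# Illustrative sketch: a representation counts as "warranted" only when the
# process that produced it was vetted (or successfully designed) to reliably
# yield veridical representations. Names are invented, not taken from the
# Cognitive Process Ontology.

@dataclass
class CognitiveProcess:
    name: str
    vetted: bool  # vetted or successfully designed for reliability

@dataclass
class Representation:
    content: str
    produced_by: CognitiveProcess

    def is_warranted(self) -> bool:
        # Warrant is inherited from the producing process,
        # not from inspecting the representation's content.
        return self.produced_by.vetted

analysis = CognitiveProcess("structured analytic technique", vetted=True)
report = Representation("Actor X is mobilizing", produced_by=analysis)
print(report.is_warranted())  # True
```

The key design point the abstract makes is captured in `is_warranted`: trust attaches to the representation in virtue of its provenance, not its content.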
Related papers
- Intelligence Education made in Europe [0.0]
We show how joint intelligence education can succeed.
We draw on the experience of Germany, where all intelligence services and the Bundeswehr are academically educated together.
We show how these experiences have been successfully transferred to a European level, namely to ICE.
arXiv Detail & Related papers (2024-04-18T12:25:46Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z)
- A Study of Situational Reasoning for Traffic Understanding [63.45021731775964]
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z)
- Designing Ecosystems of Intelligence from First Principles [34.429740648284685]
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond)
Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants.
This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence.
arXiv Detail & Related papers (2022-12-02T18:24:06Z)
- Does Knowledge Help General NLU? An Empirical Study [13.305282275999781]
We investigate the contribution of external knowledge by measuring the end-to-end performance of language models.
We find that the introduction of knowledge can significantly improve the results on certain tasks while having no adverse effects on other tasks.
arXiv Detail & Related papers (2021-09-01T18:17:36Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting where the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representation and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.