Conceptual Modeling and Artificial Intelligence: Mutual Benefits from
Complementary Worlds
- URL: http://arxiv.org/abs/2110.08637v1
- Date: Sat, 16 Oct 2021 18:42:09 GMT
- Title: Conceptual Modeling and Artificial Intelligence: Mutual Benefits from
Complementary Worlds
- Authors: Dominik Bork
- Abstract summary: We are interested in tackling the intersection of the two, thus far mostly isolated, disciplines of CM and AI.
The workshop embraces the assumption that manifold mutual benefits can be realized by i) investigating what Conceptual Modeling (CM) can contribute to AI, and ii) the other way around.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conceptual modeling (CM) applies abstraction to reduce the complexity of a
system under study (e.g., an excerpt of reality). As a result of the conceptual
modeling process, a human-interpretable, formalized representation (i.e., a
conceptual model) is derived that enables understanding and communication
among humans, as well as processing by machines. Artificial Intelligence (AI)
algorithms are also applied to complex realities (regularly represented by vast
amounts of data) to identify patterns or to classify entities in the data.
Aside from the commonalities of both approaches, a significant difference can
be observed by looking at the results. While conceptual models are
comprehensible, reproducible, and explicit knowledge representations, AI
techniques are capable of efficiently deriving an output from a given input
while acting as a black box. AI solutions often lack comprehensibility and
reproducibility; even the developers of AI systems cannot explain why a certain
output is derived. In the Conceptual Modeling meets Artificial Intelligence
(CMAI) workshop, we are interested in tackling the intersection of the two,
thus far mostly isolated, disciplines of CM and AI. The workshop
embraces the assumption that manifold mutual benefits can be realized by i)
investigating what Conceptual Modeling (CM) can contribute to AI, and ii) the
other way around, what Artificial Intelligence (AI) can contribute to CM.
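The contrast the abstract draws can be made concrete with a minimal sketch: an explicit, rule-based conceptual model whose decision criterion is directly inspectable, next to a learned linear model whose fitted weights do not explain themselves. All names, rules, and parameter values below are hypothetical, chosen only to illustrate the distinction.

```python
# Hypothetical loan-decision domain used only for illustration.

def rule_based_decision(income: float, debt: float) -> str:
    """Explicit conceptual model: the criterion is human-readable
    and reproducible (approve when income exceeds three times debt)."""
    return "approve" if income > 3 * debt else "reject"

# Black-box style: parameters assumed to have been fitted from data;
# the mapping from input to output is not self-explanatory.
WEIGHTS = (0.8, -2.1)  # hypothetical fitted weights
BIAS = 0.5

def learned_decision(income: float, debt: float) -> str:
    """Learned linear threshold: produces an output efficiently,
    but the weights alone do not explain *why* it decides this way."""
    score = WEIGHTS[0] * income + WEIGHTS[1] * debt + BIAS
    return "approve" if score > 0 else "reject"

print(rule_based_decision(10, 2))  # approve (10 > 3 * 2)
print(learned_decision(10, 2))     # approve (score = 4.3 > 0)
```

Both functions map the same inputs to outputs, but only the first carries an explicit, communicable rationale, which is the property conceptual models contribute.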
Related papers
- Cognition is All You Need -- The Next Layer of AI Above Large Language
Models [0.0]
We present Cognitive AI, a framework for neurosymbolic cognition outside of large language models.
We propose that Cognitive AI is a necessary precursor to the evolution of advanced forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches alone.
We conclude with a discussion of the implications for large language models, adoption cycles in AI, and commercial Cognitive AI development.
arXiv Detail & Related papers (2024-03-04T16:11:57Z) - On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z) - Conceptual Modeling and Artificial Intelligence: A Systematic Mapping
Study [0.5156484100374059]
In conceptual modeling (CM), humans apply abstraction to represent excerpts of reality as a means of understanding and communication among humans, and of processing by machines.
Recently, a trend toward intertwining CM and AI emerged.
This systematic mapping study shows how this interdisciplinary research field is structured, which mutual benefits the intertwining yields, and which future research directions emerge.
arXiv Detail & Related papers (2023-03-12T21:23:46Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - Explainable AI without Interpretable Model [0.0]
It has become more important than ever that AI systems be able to explain the reasoning behind their results to end users.
Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations.
The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly.
arXiv Detail & Related papers (2020-09-29T13:29:44Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - AI from concrete to abstract: demystifying artificial intelligence to
the general public [0.0]
This article presents a new methodology, AI from concrete to abstract (AIcon2abs).
The main strategy adopted is to promote the demystification of artificial intelligence.
The simplicity of the WiSARD weightless artificial neural network model enables easy visualization and understanding of training and classification tasks.
arXiv Detail & Related papers (2020-06-07T01:14:06Z) - Distributed and Democratized Learning: Philosophy and Research
Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.