Marriage is a Peach and a Chalice: Modelling Cultural Symbolism on the Semantic Web
- URL: http://arxiv.org/abs/2111.02123v1
- Date: Wed, 3 Nov 2021 10:40:50 GMT
- Title: Marriage is a Peach and a Chalice: Modelling Cultural Symbolism on the Semantic Web
- Authors: Bruno Sartini, Marieke van Erp, Aldo Gangemi
- Abstract summary: We introduce the Simulation Ontology, an ontology that models the background knowledge of symbolic meanings.
We re-engineered the symbolic knowledge already present in heterogeneous resources by converting it into our ontology schema to create HyperReal.
A first experiment run on the knowledge graph is presented to show the potential of quantitative research on symbolism.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we fill the gap in the Semantic Web in the context of Cultural
Symbolism. Building upon earlier work, we introduce the Simulation Ontology,
an ontology that models the background knowledge of symbolic meanings,
developed by combining the concepts taken from the authoritative theory of
Simulacra and Simulations of Jean Baudrillard with symbolic structures and
content taken from "Symbolism: a Comprehensive Dictionary" by Steven Olderr. We
re-engineered the symbolic knowledge already present in heterogeneous resources
by converting it into our ontology schema to create HyperReal, the first
knowledge graph completely dedicated to cultural symbolism. A first experiment
run on the knowledge graph is presented to show the potential of quantitative
research on symbolism.
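The core modelling idea described in the abstract (a symbol linked to a symbolized meaning within a cultural context) can be sketched in plain Python. This is a minimal illustrative sketch, not the actual HyperReal schema: the class and field names (`Simulation`, `simulacrum`, `reality_counterpart`, `context`) and the extra `longevity` entry are assumptions for demonstration; the peach/chalice/marriage pairings come from the paper's title.

```python
from dataclasses import dataclass

# Hypothetical, simplified rendering of the ontology's core pattern:
# a simulation links a simulacrum (the symbol) to a reality counterpart
# (the symbolized meaning), scoped to a cultural context.
@dataclass(frozen=True)
class Simulation:
    simulacrum: str           # the symbol, e.g. "peach"
    reality_counterpart: str  # the symbolic meaning, e.g. "marriage"
    context: str              # the cultural context in which the link holds

# A toy HyperReal-style knowledge graph as a set of simulations.
kg = {
    Simulation("peach", "marriage", "Chinese culture"),
    Simulation("chalice", "marriage", "Western Christian culture"),
    Simulation("peach", "longevity", "Chinese culture"),
}

def meanings_of(symbol: str) -> set[str]:
    """All meanings a given simulacrum can take on across contexts."""
    return {s.reality_counterpart for s in kg if s.simulacrum == symbol}

def symbols_for(meaning: str) -> set[str]:
    """Inverse query: all simulacra that symbolize a given meaning."""
    return {s.simulacrum for s in kg if s.reality_counterpart == meaning}

print(meanings_of("peach"))     # {'marriage', 'longevity'}
print(symbols_for("marriage"))  # {'chalice', 'peach'}
```

Queries like these (counting how many meanings a symbol carries, or how many symbols converge on one meaning) are the kind of quantitative analysis a knowledge graph of symbolism makes possible.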
Related papers
- Simulation of Non-Ordinary Consciousness [0.0]
Glyph is a generative symbolic interface designed to simulate psilocybin-like symbolic cognition.
It consistently generates high-entropy, metaphor-saturated, and ego-dissolving language.
arXiv Detail & Related papers (2025-03-29T23:04:04Z)
- Neural-Symbolic Reasoning over Knowledge Graphs: A Survey from a Query Perspective [55.79507207292647]
Knowledge graph reasoning is pivotal in various domains such as data mining, artificial intelligence, the Web, and social sciences.
The rise of neural-symbolic AI marks a significant advancement, merging the robustness of deep learning with the precision of symbolic reasoning.
The advent of large language models (LLMs) has opened new frontiers in knowledge graph reasoning.
arXiv Detail & Related papers (2024-11-30T18:54:08Z)
- What Makes a Maze Look Like a Maze? [92.80800000328277]
We introduce Deep Schema Grounding (DSG), a framework that leverages explicit structured representations of visual abstractions for grounding and reasoning.
At the core of DSG are schemas--dependency graph descriptions of abstract concepts that decompose them into more primitive-level symbols.
We show that DSG significantly improves the abstract visual reasoning performance of vision-language models.
arXiv Detail & Related papers (2024-09-12T16:41:47Z)
- IICONGRAPH: improved Iconographic and Iconological Statements in Knowledge Graphs [0.0]
IICONGRAPH is a KG that was created by refining and extending the iconographic and iconological statements of ArCo and Wikidata.
IICONGRAPH is released and documented in accordance with the FAIR principles to guarantee the resource's reusability.
arXiv Detail & Related papers (2024-01-24T15:44:16Z)
- Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning [58.5857133154749]
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage the recent advancement of LLMs as an approximation of the two ideal properties.
Our method shows superiority in extensive activity understanding tasks.
arXiv Detail & Related papers (2023-11-29T05:27:14Z)
- Kiki or Bouba? Sound Symbolism in Vision-and-Language Models [13.300199242824934]
We show that sound symbolism is reflected in vision-and-language models such as CLIP and Stable Diffusion.
Our work provides a novel method for demonstrating sound symbolism and understanding its nature using computational tools.
arXiv Detail & Related papers (2023-10-25T17:15:55Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework of Text-to-Image generation for Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- Semantics, Ontology and Explanation [0.0]
We discuss the relation between ontological unpacking and other forms of explanation in philosophy and science.
We also discuss the relation between ontological unpacking and other forms of explanation in the area of Artificial Intelligence.
arXiv Detail & Related papers (2023-04-21T16:54:34Z)
- Dual Embodied-Symbolic Concept Representations for Deep Learning [0.8722210937404288]
We advocate the use of a dual-level model for concept representations.
The embodied level consists of concept-oriented feature representations, and the symbolic level consists of concept graphs.
We discuss two important use cases: embodied-symbolic knowledge distillation for few-shot class incremental learning, and embodied-symbolic fused representation for image-text matching.
arXiv Detail & Related papers (2022-03-01T16:40:12Z)
- Emergent Graphical Conventions in a Visual Communication Game [80.79297387339614]
Humans communicate with graphical sketches apart from symbolic languages.
We take the very first step to model and simulate such an evolution process via two neural agents playing a visual communication game.
We devise a novel reinforcement learning method such that agents are evolved jointly towards successful communication and abstract graphical conventions.
arXiv Detail & Related papers (2021-11-28T18:59:57Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Interpretable Visual Reasoning via Induced Symbolic Space [75.95241948390472]
We study the problem of concept induction in visual reasoning, i.e., identifying concepts and their hierarchical relationships from question-answer pairs associated with images.
We first design a new framework named object-centric compositional attention model (OCCAM) to perform the visual reasoning task with object-level visual features.
We then come up with a method to induce concepts of objects and relations using clues from the attention patterns between objects' visual features and question words.
arXiv Detail & Related papers (2020-11-23T18:21:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.