Automatic Modeling of Social Concepts Evoked by Art Images as Multimodal Frames
- URL: http://arxiv.org/abs/2110.07420v1
- Date: Thu, 14 Oct 2021 14:50:22 GMT
- Title: Automatic Modeling of Social Concepts Evoked by Art Images as Multimodal Frames
- Authors: Delfina Sol Martinez Pandiani and Valentina Presutti
- Abstract summary: Social concepts referring to non-physical objects are powerful tools to describe, index, and query the content of visual data.
We propose a software approach to represent social concepts as multimodal frames, by integrating multisensory data.
Our method focuses on the extraction, analysis, and integration of multimodal features from visual art material tagged with the concepts of interest.
- Score: 1.4502611532302037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social concepts referring to non-physical objects--such as revolution,
violence, or friendship--are powerful tools to describe, index, and query the
content of visual data, including ever-growing collections of art images from
the Cultural Heritage (CH) field. While much progress has been made towards
complete image understanding in computer vision, automatic detection of social
concepts evoked by images is still a challenge. This is partly due to the
well-known semantic gap problem, which is worsened for social concepts because
they lack unique physical features and rely on less specific cues than
concrete concepts. In this paper, we propose to translate recent
cognitive theories about social concept representation into a software approach
to represent them as multimodal frames, by integrating multisensory data. Our
method focuses on the extraction, analysis, and integration of multimodal
features from visual art material tagged with the concepts of interest. We
define a conceptual model and present a novel ontology for formally
representing social concepts as multimodal frames. Taking the Tate Gallery's
collection as an empirical basis, we test our method on a corpus of art
images to provide a proof of concept of its potential. We discuss further
directions of research, and provide all software, data sources, and results.
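To make the idea of a multimodal frame more concrete, the following is a minimal sketch in Python with rdflib, assuming a hypothetical RDF vocabulary: the namespace, class, and property names (mmf:MultimodalFrame, mmf:hasDominantColour, mmf:hasAssociatedTerm, mmf:evokedBy) and the artwork identifier are illustrative placeholders, not the ontology or data released with the paper. It shows how a social concept could act as a hub node that aggregates visual and textual evidence extracted from tagged artworks.

```python
# Hedged sketch: a social concept as a multimodal frame in RDF.
# All vocabulary terms below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

MMF = Namespace("http://example.org/multimodal-frame#")  # hypothetical vocabulary
TATE = Namespace("http://example.org/tate/")              # hypothetical artwork IDs

g = Graph()
g.bind("mmf", MMF)

# The social concept "violence" modelled as a frame node.
violence = MMF.Violence
g.add((violence, RDF.type, MMF.MultimodalFrame))
g.add((violence, RDFS.label, Literal("violence")))

# Visual evidence slot: e.g. a dominant colour found in the tagged images.
g.add((violence, MMF.hasDominantColour, Literal("#8b0000")))

# Textual evidence slot: e.g. a frequent term from titles and captions.
g.add((violence, MMF.hasAssociatedTerm, Literal("war")))

# Provenance link back to an artwork the evidence was extracted from.
g.add((violence, MMF.evokedBy, TATE["artwork-12345"]))

print(g.serialize(format="turtle"))
```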
Related papers
- Explainable Concept Generation through Vision-Language Preference Learning [7.736445799116692]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc.
We devise a reinforcement learning-based preference optimization algorithm that fine-tunes the vision-language generative model.
In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
arXiv Detail & Related papers (2024-08-24T02:26:42Z)
- For a semiotic AI: Bridging computer vision and visual semiotics for computational observation of large scale facial image archives [3.418398936676879]
This work presents FRESCO, a framework designed to explore the socio-cultural implications of images on social media platforms at scale.
FRESCO deconstructs images into numerical and categorical variables using state-of-the-art computer vision techniques.
The framework analyzes images across three levels: the plastic level, encompassing fundamental visual features like lines and colors; the figurative level, representing specific entities or concepts; and the enunciation level, which focuses particularly on constructing the point of view of the spectator and observer.
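As a rough illustration of what "plastic level" features could look like in code, here is a minimal sketch with OpenCV and scikit-learn. It is not FRESCO's actual pipeline; the function name and thresholds are invented for the example, which simply extracts dominant colours and a simple edge-density measure from an image.

```python
# Hedged sketch: plastic-level features (colours, lines) from one image.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def plastic_features(path: str, n_colours: int = 5) -> dict:
    img = cv2.imread(path)                     # BGR uint8; None if unreadable
    img = cv2.resize(img, (256, 256))          # keep clustering cheap

    # Dominant colours: k-means over pixels in RGB space.
    pixels = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).reshape(-1, 3)
    km = KMeans(n_clusters=n_colours, n_init=10).fit(pixels)
    colours = km.cluster_centers_.astype(int).tolist()

    # Lines/edges: fraction of pixels marked as edges by Canny.
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 100, 200)
    edge_density = float((edges > 0).mean())

    return {"dominant_colours": colours, "edge_density": edge_density}

# Example: features = plastic_features("artwork.jpg")
```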
arXiv Detail & Related papers (2024-07-03T16:57:38Z)
- Visual Concept-driven Image Generation with Text-to-Image Diffusion Model [65.96212844602866]
Text-to-image (TTI) models have demonstrated impressive results in generating high-resolution images of complex scenes.
Recent approaches have extended these methods with personalization techniques that allow them to integrate user-illustrated concepts.
However, the ability to generate images with multiple interacting concepts, such as human subjects, as well as concepts that may be entangled in one, or across multiple, image illustrations remains elusive.
We propose a concept-driven TTI personalization framework that addresses these core challenges.
arXiv Detail & Related papers (2024-02-18T07:28:37Z)
- M^2ConceptBase: A Fine-Grained Aligned Concept-Centric Multimodal Knowledge Base [61.53959791360333]
We introduce M^2ConceptBase, the first concept-centric multimodal knowledge base (MMKB).
We propose a context-aware approach to align concept-image and concept-description pairs using context information from image-text datasets.
Human studies confirm more than 95% alignment accuracy, underscoring its quality.
arXiv Detail & Related papers (2023-12-16T11:06:11Z)
- CLiC: Concept Learning in Context [54.81654147248919]
This paper builds upon recent advancements in visual concept learning.
It involves acquiring a visual concept from a source image and subsequently applying it to an object in a target image.
To localize the concept learning, we employ soft masks that contain both the concept within the mask and the surrounding image area.
arXiv Detail & Related papers (2023-11-28T01:33:18Z)
- NEUCORE: Neural Concept Reasoning for Composed Image Retrieval [16.08214739525615]
We propose a NEUral COncept REasoning model which incorporates multi-modal concept alignment and progressive multimodal fusion over aligned concepts.
Our proposed approach is evaluated on three datasets and achieves state-of-the-art results.
arXiv Detail & Related papers (2023-10-02T17:21:25Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a framework of Text-to-Image generation for Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models [79.10890337599166]
We introduce ConceptBed, a large-scale dataset that consists of 284 unique visual concepts and 33K composite text prompts.
We evaluate visual concepts that are either objects, attributes, or styles, and also evaluate four dimensions of compositionality: counting, attributes, relations, and actions.
Our results point to a trade-off between learning the concepts and preserving the compositionality which existing approaches struggle to overcome.
arXiv Detail & Related papers (2023-06-07T18:00:38Z)
- FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations [99.54048050189971]
We present a framework for learning new visual concepts quickly, guided by multiple naturally occurring data streams.
The learned concepts support downstream applications, such as answering questions by reasoning about unseen images.
We demonstrate the effectiveness of our model on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-03-30T19:45:00Z)