Inching Towards Automated Understanding of the Meaning of Art: An
Application to Computational Analysis of Mondrian's Artwork
- URL: http://arxiv.org/abs/2302.00594v1
- Date: Thu, 29 Dec 2022 23:34:19 GMT
- Title: Inching Towards Automated Understanding of the Meaning of Art: An
Application to Computational Analysis of Mondrian's Artwork
- Authors: Alex Doboli, Mahan Agha Zahedi, Niloofar Gholamrezaei
- Abstract summary: This paper attempts to identify capabilities that are related to semantic processing.
The proposed methodology identifies the missing capabilities by comparing the process of understanding Mondrian's paintings with the process of understanding electronic circuit designs.
To explain the usefulness of the methodology, the paper discusses a new, three-step computational method to distinguish Mondrian's paintings from other artwork.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) have been successfully used in classifying
digital images but have been less successful in classifying images with
meanings that are not linear combinations of their visualized features, like
images of artwork. Moreover, it is unknown what additional features must be
included into DNNs, so that they can possibly classify using features beyond
visually displayed features, like color, size, and form. Non-displayed features
are important in abstract representations, reasoning, and understanding
ambiguous expressions, which are arguably topics less studied by current AI
methods. This paper attempts to identify capabilities that are related to
semantic processing, a current limitation of DNNs. The proposed methodology
identifies the missing capabilities by comparing the process of understanding
Mondrian's paintings with the process of understanding electronic circuit
designs, another instance of creative problem solving. The compared entities are
cognitive architectures that attempt to loosely mimic cognitive activities. The
paper offers a detailed presentation of the characteristics of the
architectural components, like goals, concepts, ideas, rules, procedures,
beliefs, expectations, and outcomes. To explain the usefulness of the
methodology, the paper discusses a new, three-step computational method to
distinguish Mondrian's paintings from other artwork. The method incorporates,
in backward order, the cognitive architecture's components that operate only
on the characteristics of the available data.
Related papers
- Explainable Concept Generation through Vision-Language Preference Learning [7.736445799116692]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc.
We devise a reinforcement learning-based preference optimization algorithm that fine-tunes the vision-language generative model.
In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
arXiv Detail & Related papers (2024-08-24T02:26:42Z)
- Feature CAM: Interpretable AI in Image Classification [2.4409988934338767]
There is a lack of trust in using Artificial Intelligence in critical, high-precision fields such as security, finance, health, and manufacturing.
We introduce Feature CAM, a novel technique in the perturbation-activation combination, to create fine-grained, class-discriminative visualizations.
The resulting saliency maps proved to be 3-4 times more human-interpretable than the state-of-the-art in ABM.
arXiv Detail & Related papers (2024-03-08T20:16:00Z)
- ARTxAI: Explainable Artificial Intelligence Curates Deep Representation Learning for Artistic Images using Fuzzy Techniques [11.286457041998569]
We show that the features obtained from different tasks in artistic image classification are suitable for solving other tasks of a similar nature.
We propose an explainable artificial intelligence method to map known visual traits of an image to the features used by the deep learning model.
arXiv Detail & Related papers (2023-08-29T13:15:13Z)
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which impacts several cognitive functions for humans.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- Separating Skills and Concepts for Novel Visual Question Answering [66.46070380927372]
Generalization to out-of-distribution data has been a problem for Visual Question Answering (VQA) models.
"Skills" are visual tasks, such as counting or attribute recognition, and are applied to "concepts" mentioned in the question.
We present a novel method for learning to compose skills and concepts that separates these two factors implicitly within a model.
arXiv Detail & Related papers (2021-07-19T18:55:10Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Graph Neural Networks for Knowledge Enhanced Visual Representation of Paintings [14.89186519385364]
ArtSAGENet is a novel architecture that integrates Graph Neural Networks (GNNs) and Convolutional Neural Networks (CNNs).
We show that our proposed ArtSAGENet captures and encodes valuable dependencies between the artists and the artworks.
Our findings underline a great potential of integrating visual content and semantics for fine art analysis and curation.
arXiv Detail & Related papers (2021-05-17T23:05:36Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generating such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.