DELAUNAY: a dataset of abstract art for psychophysical and machine
learning research
- URL: http://arxiv.org/abs/2201.12123v1
- Date: Fri, 28 Jan 2022 13:57:32 GMT
- Title: DELAUNAY: a dataset of abstract art for psychophysical and machine
learning research
- Authors: Camille Gontier, Jakob Jordan, Mihai A. Petrovici
- Abstract summary: We introduce DELAUNAY, a dataset of abstract paintings and non-figurative art objects labelled by the artists' names.
This dataset provides a middle ground between natural images and artificial patterns and can thus be used in a variety of contexts.
We train an off-the-shelf convolutional neural network on DELAUNAY, highlighting several of its intriguing features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image datasets are commonly used in psychophysical experiments and in machine
learning research. Most publicly available datasets are comprised of images of
realistic and natural objects. However, while typical machine learning models
lack any domain specific knowledge about natural objects, humans can leverage
prior experience for such data, making comparisons between artificial and
natural learning challenging. Here, we introduce DELAUNAY, a dataset of
abstract paintings and non-figurative art objects labelled by the artists'
names. This dataset provides a middle ground between natural images and
artificial patterns and can thus be used in a variety of contexts, for example
to investigate the sample efficiency of humans and artificial neural networks.
Finally, we train an off-the-shelf convolutional neural network on DELAUNAY,
highlighting several of its intriguing features.
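As an illustration of the kind of experiment described above, here is a minimal sketch of fine-tuning an off-the-shelf convolutional network on an artist-labelled image dataset. It assumes the images are organised as one folder per artist under hypothetical "delaunay/train" and "delaunay/val" directories and uses a torchvision ResNet-18; the paper does not prescribe this layout, this architecture, or these hyperparameters.

```python
# Minimal sketch: fine-tuning an off-the-shelf CNN on an artist-labelled image
# dataset. The folder layout, architecture, and hyperparameters are assumptions
# for illustration, not the authors' exact setup.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical paths: one sub-folder per artist, as in torchvision's ImageFolder.
train_set = datasets.ImageFolder("delaunay/train", transform=preprocess)
val_set = datasets.ImageFolder("delaunay/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=32, shuffle=False, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # one class per artist
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Validation accuracy after each epoch.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```

Because the class labels are artists rather than object categories, chance accuracy is 1 divided by the number of artists, which is the natural baseline for the validation accuracy printed above.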
Related papers
- PUG: Photorealistic and Semantically Controllable Synthetic Data for
Representation Learning [31.81199165450692]
We present a new generation of interactive environments for representation learning research that offer both controllability and realism.
We use the Unreal Engine, a powerful game engine well known in the entertainment industry, to produce PUG environments and datasets for representation learning.
arXiv Detail & Related papers (2023-08-08T01:33:13Z)
- CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images [7.868449549351487]
This article proposes to enhance our ability to recognise AI-generated images through computer vision.
Together, the two sets of data form a binary classification problem: is a given photograph real or generated by AI?
This study proposes the use of a Convolutional Neural Network (CNN) to classify the images into two categories: Real or Fake.
arXiv Detail & Related papers (2023-03-24T16:33:06Z)
- Procedural Humans for Computer Vision [1.9550079119934403]
We build a parametric model of the face and body, including articulated hands, to generate realistic images of humans based on this body model.
We show that this can be extended to include the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety.
arXiv Detail & Related papers (2023-01-03T15:44:48Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- A Review of Deep Learning Techniques for Markerless Human Motion on Synthetic Datasets [0.0]
Estimating human posture has recently gained increasing attention in the computer vision community.
We present a model that can predict the skeleton of an animation based solely on 2D images.
The implementation process uses DeepLabCut on its own dataset to perform many necessary steps.
arXiv Detail & Related papers (2022-01-07T15:42:50Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- The Intrinsic Dimension of Images and Its Impact on Learning [60.811039723427676]
It is widely believed that natural image data exhibits low-dimensional structure despite the high dimensionality of conventional pixel representations.
In this work, we apply dimension estimation tools to popular datasets and investigate the role of low-dimensional structure in deep learning (a minimal sketch of one such estimator appears after this list).
arXiv Detail & Related papers (2021-04-18T16:29:23Z)
- Insights From A Large-Scale Database of Material Depictions In Paintings [18.2193253052961]
We examine the give-and-take relationship between visual recognition systems and the rich information available in the fine arts.
We find that visual recognition systems designed for natural images can work surprisingly well on paintings.
We show that learning from paintings can be beneficial for neural networks that are intended to be used on natural images.
arXiv Detail & Related papers (2020-11-24T18:42:58Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms a visual-only state-of-the-art method MoCo.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
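The entry above on "The Intrinsic Dimension of Images and Its Impact on Learning" mentions applying "dimension estimation tools" to image datasets. Below is a minimal sketch of one common such tool, the Levina-Bickel maximum-likelihood estimator, run on synthetic data; this is an illustrative choice rather than necessarily the estimator used in that paper, and the function name and test setup are hypothetical.

```python
# Minimal sketch: Levina-Bickel maximum-likelihood intrinsic-dimension estimate,
# applied here to synthetic points rather than images.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dimension_mle(points: np.ndarray, k: int = 10) -> float:
    """Average Levina-Bickel MLE of intrinsic dimension using k nearest neighbours."""
    # Distances to the k nearest neighbours of each point (plus the point itself).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    distances, _ = nn.kneighbors(points)
    distances = distances[:, 1:]  # drop the zero self-distance
    # Per-point estimate: (k - 1) divided by the summed log-ratios T_k / T_j.
    log_ratios = np.log(distances[:, -1:] / distances[:, :-1])
    per_point = (k - 1) / log_ratios.sum(axis=1)
    return float(per_point.mean())

# Hypothetical sanity check: points on a 2-D linear subspace embedded in 10-D
# should yield an estimate close to 2.
rng = np.random.default_rng(0)
plane = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10))
print(f"estimated intrinsic dimension: {intrinsic_dimension_mle(plane):.2f}")
```

On image data, the same estimator would be applied to flattened pixel vectors or learned features; estimates far below the pixel dimensionality are what support the low-dimensional-structure claim.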
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.