Impacts of the Numbers of Colors and Shapes on Outlier Detection: from
Automated to User Evaluation
- URL: http://arxiv.org/abs/2103.06084v1
- Date: Wed, 10 Mar 2021 14:35:53 GMT
- Authors: Loann Giovannangeli, Romain Giot, David Auber and Romain Bourqui
- Abstract summary: This paper contributes to the theme by extending visual search theories to an information visualization context.
We consider a visual search task where subjects are asked to find an unknown outlier in a grid of randomly laid out distractors.
The results show that the major difficulty factor is the number of visual attributes that are used to encode the outlier.
- Score: 1.7205106391379026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of efficient representations is well established as a fruitful way
to explore and analyze complex or large data. In these representations, data
are encoded with various visual attributes depending on the needs of the
representation itself. To make coherent design choices about visual attributes,
the visual search field proposes guidelines based on the human brain's perception
of features. However, information visualization representations frequently need
to depict more data than these guidelines have been validated on. Since then,
the information visualization community has extended these guidelines to a
wider parameter space.
This paper contributes to this theme by extending visual search theories to
an information visualization context. We consider a visual search task where
subjects are asked to find an unknown outlier in a grid of randomly laid out
distractors. Stimuli are defined by color and shape features for the purpose of
visually encoding categorical data. The experimental protocol consists of a
parameter-space reduction step (i.e., sub-sampling) based on a machine
learning model, and a user evaluation to measure capacity limits and validate
hypotheses. The results show that the major difficulty factor is the number of
visual attributes that are used to encode the outlier. When redundantly
encoded, the display heterogeneity has no effect on the task. When encoded with
one attribute, the difficulty depends on that attribute's heterogeneity until
its capacity limit (7 for color, 5 for shape) is reached. Finally, when encoded
with two attributes simultaneously, performance drops drastically even with
minor heterogeneity.
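The visual search task described above can be illustrated with a small sketch. The grid generator below is a hypothetical reconstruction, not the paper's actual stimulus code: the color and shape names, the grid size, and the use of homogeneous distractors are assumptions for illustration; only the palette sizes echo the reported capacity limits (7 colors, 5 shapes).

```python
import random

# Illustrative palettes; sizes match the capacity limits reported in the
# abstract (7 colors, 5 shapes), but the names themselves are assumed.
COLORS = ["red", "blue", "green", "orange", "purple", "brown", "pink"]
SHAPES = ["circle", "square", "triangle", "diamond", "star"]

def make_stimulus(rows, cols, outlier_attrs, seed=0):
    """Return a rows x cols grid of (color, shape) cells with one outlier.

    outlier_attrs selects which attributes encode the outlier:
    {"color"}, {"shape"}, or {"color", "shape"} (redundant or joint coding).
    """
    rng = random.Random(seed)
    base = (rng.choice(COLORS), rng.choice(SHAPES))
    # Homogeneous distractors: every cell starts as the base (color, shape).
    grid = [[base for _ in range(cols)] for _ in range(rows)]
    # The outlier differs from the distractors only in the selected attributes.
    color = (rng.choice([c for c in COLORS if c != base[0]])
             if "color" in outlier_attrs else base[0])
    shape = (rng.choice([s for s in SHAPES if s != base[1]])
             if "shape" in outlier_attrs else base[1])
    r, c = rng.randrange(rows), rng.randrange(cols)
    grid[r][c] = (color, shape)
    return grid, (r, c)

# A 5x5 grid whose outlier is encoded by color and shape simultaneously.
grid, pos = make_stimulus(5, 5, {"color", "shape"})
```

Varying `outlier_attrs` reproduces the three conditions the abstract contrasts: redundant encoding, single-attribute encoding, and joint two-attribute encoding.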
Related papers
- Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale
Fine-Grained Image Retrieval [65.43522019468976]
We propose attribute-aware hashing networks with self-consistency for generating attribute-aware hash codes.
We develop an encoder-decoder network with a reconstruction task to distill high-level attribute-specific vectors in an unsupervised manner.
Our models are equipped with a feature decorrelation constraint upon these attribute vectors to strengthen their representative abilities.
arXiv Detail & Related papers (2023-11-21T08:20:38Z) - Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification [78.52704557647438]
We propose a novel FIne-grained Representation and Recomposition (FIRe$^2$) framework to tackle both limitations without any auxiliary annotation or data.
Experiments demonstrate that FIRe$^2$ can achieve state-of-the-art performance on five widely-used cloth-changing person Re-ID benchmarks.
arXiv Detail & Related papers (2023-08-21T12:59:48Z) - Towards the Visualization of Aggregated Class Activation Maps to Analyse
the Global Contribution of Class Features [0.47248250311484113]
Class Activation Maps (CAMs) visualize the importance of each feature of a data sample contributing to the classification.
We aggregate CAMs from multiple samples to show a global explanation of the classification for semantically structured data.
Our approach allows an analyst to detect important features of high-dimensional data and derive adjustments to the AI model based on our global explanation visualization.
arXiv Detail & Related papers (2023-07-29T11:13:11Z) - The Influences of Color and Shape Features in Visual Contrastive
Learning [0.0]
This paper investigates the influences of individual image features (e.g., color and shape) on model performance, which remain ambiguous.
Experimental results show that compared with supervised representations, contrastive representations tend to cluster with objects of similar color.
arXiv Detail & Related papers (2023-01-29T15:10:14Z) - Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know
How to Reason? [30.16956370267339]
We introduce a protocol to evaluate visual representations for the task of Visual Question Answering.
In order to decouple visual feature extraction from reasoning, we design a specific attention-based reasoning module.
We compare two types of visual representations, densely extracted local features and object-centric ones, against the performance of a perfect image representation using ground truth.
arXiv Detail & Related papers (2022-12-20T14:36:45Z) - Measuring the Interpretability of Unsupervised Representations via
Quantized Reverse Probing [97.70862116338554]
We investigate the problem of measuring interpretability of self-supervised representations.
We formulate the latter as estimating the mutual information between the representation and a space of manually labelled concepts.
We use our method to evaluate a large number of self-supervised representations, ranking them by interpretability.
arXiv Detail & Related papers (2022-09-07T16:18:50Z) - Causal Transportability for Visual Recognition [70.13627281087325]
We show that standard classifiers fail because the association between images and labels is not transportable across settings.
We then show that the causal effect, which severs all sources of confounding, remains invariant across domains.
This motivates us to develop an algorithm to estimate the causal effect for image classification.
arXiv Detail & Related papers (2022-04-26T15:02:11Z) - Quantifying Learnability and Describability of Visual Concepts Emerging
in Representation Learning [91.58529629419135]
We consider how to characterise visual groupings discovered automatically by deep neural networks.
We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings.
arXiv Detail & Related papers (2020-10-27T18:41:49Z) - Gravitational Models Explain Shifts on Human Visual Attention [80.76475913429357]
Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing.
Various methods to estimate saliency have been proposed in the last three decades.
We propose a gravitational model (GRAV) to describe the attentional shifts.
arXiv Detail & Related papers (2020-09-15T10:12:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.