Generalization of CNNs on Relational Reasoning with Bar Charts
- URL: http://arxiv.org/abs/2503.00086v1
- Date: Fri, 28 Feb 2025 13:32:06 GMT
- Title: Generalization of CNNs on Relational Reasoning with Bar Charts
- Authors: Zhenxing Cui, Lu Chen, Yunhai Wang, Daniel Haehn, Yong Wang, Hanspeter Pfister
- Abstract summary: We revisit previous experiments on graphical perception and update the benchmark performance of CNNs. We test the generalization performance of CNNs on a classic relational reasoning task: estimating bar length ratios in a bar chart. Our results show that CNNs outperform humans only when the training and test data have the same visual encodings.
- Score: 36.78931885142017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a systematic study of the generalization of convolutional neural networks (CNNs) and humans on relational reasoning tasks with bar charts. We first revisit previous experiments on graphical perception and update the benchmark performance of CNNs. We then test the generalization performance of CNNs on a classic relational reasoning task: estimating bar length ratios in a bar chart, by progressively perturbing the standard visualizations. We further conduct a user study to compare the performance of CNNs and humans. Our results show that CNNs outperform humans only when the training and test data have the same visual encodings. Otherwise, they may perform worse. We also find that CNNs are sensitive to perturbations in various visual encodings, regardless of their relevance to the target bars. Yet, humans are mainly influenced by bar lengths. Our study suggests that robust relational reasoning with visualizations is challenging for CNNs. Improving CNNs' generalization performance may require training them to better recognize task-related visual properties.
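The ratio-estimation task described in the abstract can be pictured with a minimal stimulus generator: rasterize two bars of random height and label the image with the shorter-to-longer length ratio. This is an illustrative sketch only; the function name, grid size, and two-bar layout are assumptions for exposition, not the authors' actual experimental pipeline.

```python
import random

def make_bar_chart(grid=32, seed=None):
    """Rasterize two vertical bars of random height onto a binary
    grid x grid image; the label is the shorter/longer length ratio."""
    rng = random.Random(seed)
    h1, h2 = rng.randint(1, grid), rng.randint(1, grid)
    image = [[0] * grid for _ in range(grid)]
    width = grid // 4  # bar width; bars are separated by a one-pixel gap
    for row in range(grid):
        for col in range(grid):
            # Bars grow upward from the bottom row of the raster.
            in_bar1 = width <= col < 2 * width and row >= grid - h1
            in_bar2 = 2 * width + 1 <= col < 3 * width + 1 and row >= grid - h2
            if in_bar1 or in_bar2:
                image[row][col] = 1
    ratio = min(h1, h2) / max(h1, h2)  # regression target in (0, 1]
    return image, ratio
```

A CNN trained on such (image, ratio) pairs sees only one fixed visual encoding; the paper's perturbation experiments vary encodings like bar position and width at test time to probe generalization.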
Related papers
- A Neurosymbolic Framework for Bias Correction in Convolutional Neural Networks [2.249916681499244]
We introduce a neurosymbolic framework called NeSyBiCor for bias correction in a trained CNN.
We show that our framework successfully corrects the biases of CNNs trained with subsets of classes from the "Places" dataset.
arXiv Detail & Related papers (2024-05-24T19:09:53Z) - Transferability of Convolutional Neural Networks in Stationary Learning Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
arXiv Detail & Related papers (2023-07-21T13:51:45Z) - Improving the Accuracy and Robustness of CNNs Using a Deep CCA Neural Data Regularizer [2.026424957803652]
As convolutional neural networks (CNNs) become more accurate at object recognition, their representations become more similar to the primate visual system.
Previous attempts to exploit this brain similarity showed very modest gains in accuracy, owing in part to limitations of the regularization method.
We develop a new neural data regularizer for CNNs that uses Deep Canonical Correlation Analysis (DCCA) to optimize the resemblance of the CNN's image representations to those of the monkey visual cortex.
arXiv Detail & Related papers (2022-09-06T15:40:39Z) - How explainable are adversarially-robust CNNs? [7.143109213647008]
Three important criteria of existing convolutional neural networks (CNNs) are (1) test-set accuracy; (2) out-of-distribution accuracy; and (3) explainability.
Here, we perform the first, large-scale evaluation of the relations of the three criteria using 9 feature-importance methods and 12 ImageNet-trained CNNs.
arXiv Detail & Related papers (2022-05-25T20:24:19Z) - CNNs Avoid Curse of Dimensionality by Learning on Patches [11.546219454021935]
We argue that convolutional neural networks (CNNs) operate on the domain of image patches.
Our work is the first to derive an a priori error bound for the generalization error of CNNs.
Our patch-based theory also offers explanation for why data augmentation techniques like Cutout, CutMix and random cropping are effective in improving the generalization error of CNNs.
arXiv Detail & Related papers (2022-05-22T06:22:27Z) - Controlled-rearing studies of newborn chicks and deep neural networks [0.0]
Convolutional neural networks (CNNs) can achieve human-level performance on challenging object recognition tasks.
CNNs are thought to be "data hungry," requiring massive amounts of training data to develop accurate models for object recognition.
This critique challenges the promise of using CNNs as models of visual development.
arXiv Detail & Related papers (2021-12-12T00:45:07Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.