Finding Biological Plausibility for Adversarially Robust Features via
Metameric Tasks
- URL: http://arxiv.org/abs/2202.00838v2
- Date: Fri, 4 Feb 2022 00:24:45 GMT
- Title: Finding Biological Plausibility for Adversarially Robust Features via
Metameric Tasks
- Authors: Anne Harrington and Arturo Deza
- Abstract summary: We show that adversarially robust representations capture peripheral computation better than non-robust representations.
Our findings support the idea that localized texture summary statistic representations may drive human invariance to adversarial perturbations.
- Score: 3.3504365823045044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work suggests that representations learned by adversarially robust
networks are more human perceptually-aligned than non-robust networks via image
manipulations. Despite appearing closer to human visual perception, it is
unclear if the constraints in robust DNN representations match biological
constraints found in human vision. Human vision seems to rely on
texture-based/summary statistic representations in the periphery, which have
been shown to explain phenomena such as crowding and performance on visual
search tasks. To understand how adversarially robust
optimizations/representations compare to human vision, we performed a
psychophysics experiment using a set of metameric discrimination tasks where we
evaluated how well human observers could distinguish between images synthesized
to match adversarially robust representations compared to non-robust
representations and a texture synthesis model of peripheral vision (Texforms).
We found that the discriminability of robust-representation and texture-model
images decreased to near-chance performance as stimuli were presented farther
in the periphery. Moreover, performance on robust and texture-model images
showed similar trends within participants, while performance on non-robust
representations changed minimally across the visual field. These results
together suggest that (1) adversarially robust representations capture
peripheral computation better than non-robust representations and (2) robust
representations capture peripheral computation similar to current
state-of-the-art texture peripheral vision models. More broadly, our findings
support the idea that localized texture summary statistic representations may
drive human invariance to adversarial perturbations and that the incorporation
of such representations in DNNs could give rise to useful properties like
adversarial robustness.
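The synthesis procedure the abstract describes (rendering images whose internal representation matches a target image's) reduces to a short gradient-descent loop. Below is a minimal sketch in PyTorch, assuming a standard torchvision ResNet-50 trunk as a stand-in for the adversarially robust backbone; the paper's actual robust weights, feature layers, and optimization schedule are not reproduced here.
```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in feature extractor: a torchvision ResNet-50 trunk. The paper uses
# an adversarially trained network; those weights are assumed, not bundled.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def synthesize_metamer(target_img, steps=500, lr=0.05):
    """Gradient-descend a noise image until its features match the target's.

    target_img: (1, 3, H, W) tensor with values in [0, 1].
    """
    with torch.no_grad():
        target_feat = feature_extractor(target_img)
    metamer = torch.rand_like(target_img).requires_grad_(True)  # noise seed
    opt = torch.optim.Adam([metamer], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the synthesized image's representation toward the target's.
        loss = F.mse_loss(feature_extractor(metamer), target_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            metamer.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return metamer.detach()
```
In the experiment, stimuli synthesized this way (and Texform stimuli) were shown at increasing eccentricities; when observers could no longer discriminate a synthesized image from the original, the pair functioned as perceptual metamers.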
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - Estimating the distribution of numerosity and non-numerical visual magnitudes in natural scenes using computer vision [0.08192907805418582]
We show that in natural visual scenes the frequency of appearance of different numerosities follows a power law distribution.
We show that the correlational structure for numerosity and continuous magnitudes is stable across datasets and scene types.
arXiv Detail & Related papers (2024-09-17T09:49:29Z) - Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z) - Leveraging the Human Ventral Visual Stream to Improve Neural Network Robustness [8.419105840498917]
Human object recognition exhibits remarkable resilience in cluttered and dynamic visual environments.
Despite their unparalleled performance across numerous visual tasks, Deep Neural Networks (DNNs) remain far less robust than humans.
Here we show that DNNs, when guided by neural representations from a hierarchical sequence of regions in the human ventral visual stream, display increasing robustness to adversarial attacks.
arXiv Detail & Related papers (2024-05-04T04:33:20Z) - Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
arXiv Detail & Related papers (2024-03-26T06:04:50Z) - Zero-shot visual reasoning through probabilistic analogical mapping [2.049767929976436]
We present visiPAM (visual Probabilistic Analogical Mapping), a model of visual reasoning that synthesizes two approaches.
We show that without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task.
In addition, visiPAM closely matches the pattern of human performance on a novel task involving mapping of 3D objects across disparate categories.
arXiv Detail & Related papers (2022-09-29T20:29:26Z) - Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determining the next point of focus (the fixation), the other interpreting the visuals surrounding the fixation.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
arXiv Detail & Related papers (2022-06-15T03:44:42Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - DRG: Dual Relation Graph for Human-Object Interaction Detection [65.50707710054141]
We tackle the challenging problem of human-object interaction (HOI) detection.
Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features.
In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph.
arXiv Detail & Related papers (2020-08-26T17:59:40Z) - Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z) - Seeing eye-to-eye? A comparison of object recognition performance in
humans and deep convolutional neural networks under image manipulation [0.0]
This study aims towards a behavioral comparison of visual core object recognition performance between humans and feedforward neural networks.
Analyses of accuracy revealed that humans not only outperform DCNNs on all conditions, but also display significantly greater robustness towards shape and most notably color alterations.
arXiv Detail & Related papers (2020-07-13T10:26:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.