Hamming Similarity and Graph Laplacians for Class Partitioning and
Adversarial Image Detection
- URL: http://arxiv.org/abs/2305.01808v2
- Date: Sat, 6 May 2023 00:06:26 GMT
- Title: Hamming Similarity and Graph Laplacians for Class Partitioning and
Adversarial Image Detection
- Authors: Huma Jamil, Yajing Liu, Turgay Caglar, Christina M. Cole, Nathaniel
Blanchard, Christopher Peterson, Michael Kirby
- Abstract summary: We investigate the potential for ReLU activation patterns (encoded as bit vectors) to aid in understanding and interpreting the behavior of neural networks.
We utilize Representational Dissimilarity Matrices (RDMs) to investigate the coherence of data within the embedding spaces of a deep neural network.
We demonstrate that bit vectors aid in adversarial image detection, achieving over 95% accuracy in separating adversarial and non-adversarial images with a simple classifier.
- Score: 2.960821510561423
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Researchers typically investigate neural network representations by examining
activation outputs for one or more layers of a network. Here, we investigate
the potential for ReLU activation patterns (encoded as bit vectors) to aid in
understanding and interpreting the behavior of neural networks. We utilize
Representational Dissimilarity Matrices (RDMs) to investigate the coherence of
data within the embedding spaces of a deep neural network. From each layer of a
network, we extract and utilize bit vectors to construct similarity scores
between images. From these similarity scores, we build a similarity matrix for
a collection of images drawn from two classes. We then apply Fiedler partitioning
to the associated Laplacian matrix to separate the classes. Our results
indicate that, through bit vector representations, the network progressively
refines class detectability, with the last ReLU layer achieving better than 95%
separation accuracy. Additionally, we demonstrate that bit vectors aid in
adversarial image detection, again achieving over 95% accuracy in separating
adversarial and non-adversarial images using a simple classifier.
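As a rough illustration of the pipeline described in the abstract, the sketch below encodes ReLU activation patterns of a small toy network as bit vectors, builds a Hamming-similarity matrix over images from two synthetic classes, and splits them with the sign of the Fiedler vector of the graph Laplacian. This is a minimal sketch under assumed details, not the authors' implementation: the toy model, random data, chosen layer index, and helper name bit_vector are all illustrative.

```python
# Minimal illustrative sketch (not the authors' code):
# bit vectors -> Hamming similarity -> Laplacian -> Fiedler partition.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained network: two ReLU layers on flattened 32x32 RGB images.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 2),
)

def bit_vector(x, layer_index):
    """Encode the on/off ReLU pattern at one layer as a 0/1 vector."""
    h = x
    for i, module in enumerate(model):
        h = module(h)
        if i == layer_index:          # stop after the chosen ReLU layer
            return (h > 0).to(torch.uint8).flatten().numpy()
    raise IndexError("layer_index is past the end of the model")

# Two toy "classes": random images shifted apart so their activation patterns differ.
images = torch.cat([torch.randn(20, 3, 32, 32) - 1.0,
                    torch.randn(20, 3, 32, 32) + 1.0])
with torch.no_grad():
    bits = np.stack([bit_vector(img.unsqueeze(0), layer_index=4) for img in images])

# Hamming similarity: fraction of ReLU units whose on/off state agrees.
diff = np.abs(bits[:, None, :].astype(float) - bits[None, :, :].astype(float))
S = 1.0 - diff.mean(axis=2)           # similarity matrix (a dissimilarity/RDM view is 1 - S)

# Graph Laplacian and Fiedler partitioning: the sign pattern of the eigenvector
# belonging to the second-smallest eigenvalue splits the images into two groups.
L = np.diag(S.sum(axis=1)) - S
_, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
partition = (fiedler > 0).astype(int)
print("Fiedler partition:", partition)
```

In the paper's setting, the bit vectors would be read from each ReLU layer of a trained deep network on real images drawn from two classes; the same Hamming-based representation could also feed the simple classifier used for adversarial image detection. Here, random shifted data merely stands in for the two classes.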
Related papers
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Extracting Semantic Knowledge from GANs with Unsupervised Learning [65.32631025780631]
Generative Adversarial Networks (GANs) encode semantics in feature maps in a linearly separable form.
We propose a novel clustering algorithm, named KLiSH, which leverages the linear separability to cluster GANs' features.
KLiSH succeeds in extracting fine-grained semantics of GANs trained on datasets of various objects.
arXiv Detail & Related papers (2022-11-30T03:18:16Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Learning Hierarchical Graph Representation for Image Manipulation Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt the sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z)
- Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z)
- Experience feedback using Representation Learning for Few-Shot Object Detection on Aerial Images [2.8560476609689185]
The performance of our method is assessed on DOTA, a large-scale remote sensing image dataset.
It highlights in particular some intrinsic weaknesses for the few-shot object detection task.
arXiv Detail & Related papers (2021-09-27T13:04:53Z)
- HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate if features learned using more complex networks lead to gain in performance.
arXiv Detail & Related papers (2021-06-13T18:55:23Z)
- A new approach to descriptors generation for image retrieval by analyzing activations of deep neural network layers [43.77224853200986]
We consider the problem of descriptors construction for the task of content-based image retrieval using deep neural networks.
It is known that the total number of neurons in the convolutional part of the network is large and the majority of them have little influence on the final classification decision.
We propose a novel algorithm that allows us to extract the most significant neuron activations and utilize this information to construct effective descriptors.
arXiv Detail & Related papers (2020-07-13T18:53:10Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.