Semantic Autoencoder and Its Potential Usage for Adversarial Attack
- URL: http://arxiv.org/abs/2205.15592v1
- Date: Tue, 31 May 2022 08:10:07 GMT
- Title: Semantic Autoencoder and Its Potential Usage for Adversarial Attack
- Authors: Yurui Ming, Cuihuan Du, and Chin-Teng Lin
- Abstract summary: We propose an enhanced autoencoder architecture named semantic autoencoder.
We consider adversarial attacks on learning algorithms that rely on latent representations obtained via autoencoders.
- Score: 25.315265827962875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An autoencoder can give rise to an appropriate latent representation of the
input data; however, a representation based solely on the intrinsic properties
of the input data is usually insufficient to express semantic information. A
typical symptom is the inability to form clear boundaries when clustering these
representations. By encoding a latent representation that depends not only on
the content of the input data but also on its semantics, such as label
information, we propose an enhanced autoencoder architecture named the semantic
autoencoder. t-SNE visualizations of the representation distributions show a
clear distinction between these two types of encoders and confirm the
superiority of the semantic one, whilst the decoded samples of the two
autoencoders exhibit only faint dissimilarity, both objectively and
subjectively. Based on this observation, we consider adversarial attacks on
learning algorithms that rely on latent representations obtained via
autoencoders. It turns out that the latent contents of adversarial samples,
constructed by feeding the semantic encoder deliberately wrong label
information, follow a distribution different from that of the original input
data, even though the samples themselves differ only marginally. This new
attack vector exposed by our work deserves attention, given the need to secure
the now-widespread applications of deep learning.
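The paper does not include an implementation, but the idea lends itself to a short sketch. The following minimal PyTorch example, with hypothetical layer sizes and an MNIST-like flattened input, embeds the label and concatenates it with the input before encoding, so the latent code depends on both content and semantics; encoding with a deliberately wrong label is the attack described above. This is an illustrative sketch under those assumptions, not the authors' code.

```python
# Minimal sketch of a semantic autoencoder (hypothetical architecture sizes).
import torch
import torch.nn as nn

class SemanticAutoencoder(nn.Module):
    def __init__(self, input_dim=784, num_classes=10, label_dim=16, latent_dim=32):
        super().__init__()
        # The label embedding injects semantic (label) information into the encoder.
        self.label_emb = nn.Embedding(num_classes, label_dim)
        self.encoder = nn.Sequential(
            nn.Linear(input_dim + label_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x, y):
        # The latent code z depends on both the content x and the label y.
        z = self.encoder(torch.cat([x, self.label_emb(y)], dim=1))
        return self.decoder(z), z

model = SemanticAutoencoder()
x = torch.rand(8, 784)               # batch of flattened inputs
y_true = torch.randint(0, 10, (8,))  # correct labels
y_wrong = (y_true + 1) % 10          # deliberately wrong labels (the attack)

x_hat_clean, z_clean = model(x, y_true)
x_hat_adv, z_adv = model(x, y_wrong)

# In a trained model, the paper's observation is that the reconstructions
# differ only faintly while the latent codes shift in distribution; here we
# simply compute both distances for a (random-weight) demonstration.
print(torch.mean((x_hat_clean - x_hat_adv) ** 2))  # reconstruction gap
print(torch.mean((z_clean - z_adv) ** 2))          # latent gap
```

On a trained model, the collected `z` tensors for correct and wrong labels could be passed to a tool such as sklearn.manifold.TSNE to reproduce the kind of distribution plots the abstract describes.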
Related papers
- Sample what you can't compress [6.24979299238534]
We show how to learn a continuous encoder and decoder under a diffusion-based loss.
This approach yields better reconstruction quality as compared to GAN-based autoencoders.
We also show that the resulting representation is easier to model with a latent diffusion model as compared to the representation obtained from a state-of-the-art GAN-based loss.
arXiv Detail & Related papers (2024-09-04T08:42:42Z) - Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale Fine-Grained Image Retrieval [65.43522019468976]
We propose attribute-aware hashing networks with self-consistency for generating attribute-aware hash codes.
We develop an encoder-decoder network for a reconstruction task that distills high-level attribute-specific vectors in an unsupervised manner.
Our models are equipped with a feature decorrelation constraint upon these attribute vectors to strengthen their representative abilities.
arXiv Detail & Related papers (2023-11-21T08:20:38Z) - Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on sensitive attributes.
We also use a bias-free model to learn debiased fair representations by using adversarial learning to remove bias information from them.
arXiv Detail & Related papers (2022-04-01T15:57:47Z) - Label Semantics for Few Shot Named Entity Recognition [68.01364012546402]
We study the problem of few-shot learning for named entity recognition.
We leverage the semantic information in the names of the labels as a way of giving the model additional signal and enriched priors.
Our model learns to match the representations of named entities computed by the first encoder with label representations computed by the second encoder.
arXiv Detail & Related papers (2022-03-16T23:21:05Z) - Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
The introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements over state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour (a VAE need not map its own generated samples back to the latent codes that produced them) on the learned representations, and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Semi-supervised Autoencoding Projective Dependency Parsing [33.73819721400118]
We describe two end-to-end autoencoding models for semi-supervised graph-based projective dependency parsing.
Both models consist of two parts: an encoder enhanced by deep neural networks (DNNs) that can utilize contextual information to encode the input into latent variables, and a decoder, which is a generative model able to reconstruct the input.
arXiv Detail & Related papers (2020-11-02T03:21:39Z) - Self-Supervised Bernoulli Autoencoders for Semi-Supervised Hashing [1.8899300124593648]
This paper investigates the robustness of hashing methods based on variational autoencoders to the lack of supervision.
We propose a novel supervision method in which the model uses its label distribution predictions to implement the pairwise objective.
Our experiments show that the proposed methods can significantly increase the hash codes' quality.
arXiv Detail & Related papers (2020-07-17T07:47:10Z) - Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder [104.25716317141321]
We propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts.
Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets.
arXiv Detail & Related papers (2020-06-15T02:59:52Z) - Semi-Supervised Semantic Segmentation with Cross-Consistency Training [8.894935073145252]
We present a novel cross-consistency based semi-supervised approach for semantic segmentation.
Our method achieves state-of-the-art results in several datasets.
arXiv Detail & Related papers (2020-03-19T20:10:37Z)