iNNformant: Boundary Samples as Telltale Watermarks
- URL: http://arxiv.org/abs/2106.07303v1
- Date: Mon, 14 Jun 2021 11:18:32 GMT
- Title: iNNformant: Boundary Samples as Telltale Watermarks
- Authors: Alexander Schlögl, Tobias Kupek, Rainer Böhme
- Abstract summary: We show that it is possible to generate sets of boundary samples which can identify any of four tested microarchitectures.
These sets can be built to not contain any sample with a worse peak signal-to-noise ratio than 70 dB.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Boundary samples are special inputs to artificial neural networks crafted to
identify the execution environment used for inference by the resulting output
label. The paper presents and evaluates algorithms to generate transparent
boundary samples. Transparency refers to a small perceptual distortion of the
host signal (i.e., a natural input sample). For two established image
classifiers, ResNet on FMNIST and CIFAR10, we show that it is possible to
generate sets of boundary samples which can identify any of four tested
microarchitectures. These sets can be built to not contain any sample with a
worse peak signal-to-noise ratio than 70 dB. We analyze the relationship between
search complexity and resulting transparency.
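The transparency criterion above lends itself to a small illustration. The sketch below is not code from the paper; it simply checks that a candidate boundary sample stays within the 70 dB PSNR budget relative to its host image, assuming 8-bit pixel values. The function names are illustrative.

```python
# Illustrative sketch only: checking the 70 dB transparency criterion
# described in the abstract. Assumes 8-bit images (peak value 255);
# psnr_db and is_transparent are hypothetical names, not the paper's code.
import numpy as np

def psnr_db(host: np.ndarray, candidate: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a host image and a perturbed candidate."""
    mse = np.mean((host.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images: no distortion at all
    return 10.0 * np.log10(peak ** 2 / mse)

def is_transparent(host: np.ndarray, candidate: np.ndarray, threshold_db: float = 70.0) -> bool:
    """Keep only candidates whose PSNR does not fall below the 70 dB floor."""
    return psnr_db(host, candidate) >= threshold_db
```

In this reading, a higher PSNR means a smaller perturbation, so a 70 dB floor corresponds to a very faint distortion of the host signal.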
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Mitigating Noisy Supervision Using Synthetic Samples with Soft Labels
Noisy labels are ubiquitous in real-world datasets, especially in the large-scale ones derived from crowdsourcing and web searching.
It is challenging to train deep neural networks with noisy datasets since the networks are prone to overfitting the noisy labels during training.
We propose a framework that trains the model with new synthetic samples to mitigate the impact of noisy labels.
arXiv Detail & Related papers (2024-06-22T04:49:39Z)
- Texture-guided Saliency Distilling for Unsupervised Salient Object Detection [67.10779270290305]
We propose a novel USOD method to mine rich and accurate saliency knowledge from both easy and hard samples.
Our method achieves state-of-the-art USOD performance on RGB, RGB-D, RGB-T, and video SOD benchmarks.
arXiv Detail & Related papers (2022-07-13T02:01:07Z)
- ScatterSample: Diversified Label Sampling for Data Efficient Graph Neural Network Learning [22.278779277115234]
In some applications, both graph neural network (GNN) training and labeling new instances are expensive.
We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting.
Our experiments on five datasets show that ScatterSample significantly outperforms the other GNN active learning baselines.
arXiv Detail & Related papers (2022-06-09T04:05:02Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Transform consistency for learning with noisy labels [9.029861710944704]
We propose a method to identify clean samples using only a single network.
Clean samples tend to yield consistent predictions for the original images and the transformed images.
To mitigate the negative influence of noisy labels, we design a classification loss that combines the off-line hard labels with the on-line soft labels.
arXiv Detail & Related papers (2021-03-25T14:33:13Z)
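As a rough illustration of the consistency idea summarized in the preceding entry (not the authors' implementation), the sketch below flags samples whose predicted class agrees between the original image and its transformed version, and blends a hard-label with a soft-label cross-entropy term; the function names and the mixing weight alpha are assumptions.

```python
# Hedged sketch, not the paper's code: consistency-based clean-sample selection
# plus a loss that mixes off-line hard labels with on-line soft labels.
import numpy as np

def consistent_mask(probs_original: np.ndarray, probs_transformed: np.ndarray) -> np.ndarray:
    """Boolean mask of samples whose argmax prediction agrees on both views."""
    return probs_original.argmax(axis=1) == probs_transformed.argmax(axis=1)

def mixed_cross_entropy(probs: np.ndarray, hard_labels: np.ndarray,
                        soft_labels: np.ndarray, alpha: float = 0.5) -> float:
    """Blend cross-entropy against hard labels with cross-entropy against soft labels."""
    eps = 1e-12
    n = probs.shape[0]
    ce_hard = -np.log(probs[np.arange(n), hard_labels] + eps)
    ce_soft = -np.sum(soft_labels * np.log(probs + eps), axis=1)
    return float(np.mean(alpha * ce_hard + (1.0 - alpha) * ce_soft))
```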
- Forensicability of Deep Neural Network Inference Pipelines [68.8204255655161]
We propose methods to infer properties of the execution environment of machine learning pipelines by tracing characteristic numerical deviations in observable outputs.
Results from a series of proof-of-concept experiments give rise to possible forensic applications, such as the identification of the hardware platform used to produce deep neural network predictions.
arXiv Detail & Related papers (2021-02-01T15:41:49Z)
- Bridging In- and Out-of-distribution Samples for Their Better Discriminability [18.84265231678354]
We consider samples lying between the two (in- and out-of-distribution) and use them for training a network.
We generate such samples using multiple image transformations that corrupt inputs in various ways and with different severity levels.
We estimate where the samples generated by a single image transformation lie between ID and OOD, using a network trained on clean ID samples.
arXiv Detail & Related papers (2021-01-07T11:34:18Z)
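A hedged sketch of the corruption step summarized in the preceding entry (not the paper's code): intermediate samples between ID and OOD are produced by corrupting clean inputs at increasing severity, here with additive Gaussian noise; the severity-to-noise mapping is an assumption.

```python
# Illustrative sketch only: generating intermediate samples by corrupting an
# in-distribution image at increasing severity levels with Gaussian noise.
import numpy as np

def corrupt_with_noise(image: np.ndarray, severity: int, rng: np.random.Generator) -> np.ndarray:
    """Apply additive Gaussian noise whose strength grows with the severity level (1-5)."""
    sigma = 0.04 * severity * 255.0  # assumed mapping from severity to noise strength
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 255.0).astype(np.uint8)

# Example: five intermediate samples of increasing severity for one image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
intermediates = [corrupt_with_noise(clean, s, rng) for s in range(1, 6)]
```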
- UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders [81.5490760424213]
We propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a probabilistic RGB-D saliency detection network.
arXiv Detail & Related papers (2020-04-13T04:12:59Z)
- OpenGAN: Open Set Generative Adversarial Networks [16.02382549750862]
We propose an open set GAN architecture (OpenGAN) that is conditioned per-input sample with a feature embedding drawn from a metric space.
We are able to generate samples that are semantically similar to a given source image.
We show that performance can be significantly improved by augmenting the training data with OpenGAN samples on classes that are outside of the GAN training distribution.
arXiv Detail & Related papers (2020-03-18T07:24:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.