An Exact Poly-Time Membership-Queries Algorithm for Extraction a
three-Layer ReLU Network
- URL: http://arxiv.org/abs/2105.09673v1
- Date: Thu, 20 May 2021 11:24:08 GMT
- Title: An Exact Poly-Time Membership-Queries Algorithm for Extraction a
three-Layer ReLU Network
- Authors: Amit Daniely and Elad Granot
- Abstract summary: As machine learning becomes increasingly prevalent in our everyday lives, many organizations offer neural-network-based services as a black box.
The reasons for hiding a learning model vary: e.g., preventing copying of its behavior or keeping an adversary from reverse-engineering its mechanism.
In this work, we show a polynomial-time algorithm that uses a polynomial number of queries to precisely mimic the behavior of a three-layer network with ReLU activations.
- Score: 38.91075212630913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning becomes increasingly prevalent in our everyday lives,
many organizations offer neural-network-based services as a black box. The
reasons for hiding a learning model vary: e.g., preventing copying of its
behavior or keeping an adversary from reverse-engineering its mechanism
and revealing sensitive information about its training data.
However, even as a black box, some information can still be discovered through
specific queries. In this work, we show a polynomial-time algorithm that uses a
polynomial number of queries to precisely mimic the behavior of a three-layer
neural network with ReLU activations.
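The abstract only states the result, so the following is a minimal, hedged sketch of what extraction from membership queries looks like in the simplest possible case: recovering a single ReLU unit f(x) = max(0, <w, x> + b) from black-box output queries by exploiting its piecewise-linear structure. This is not the paper's algorithm; the single-unit setting, the margin threshold, and the finite-difference strategy are illustrative assumptions, and the three-layer result requires a far more involved analysis.

```python
# Toy illustration of model extraction via membership queries (NOT the paper's
# algorithm): recover the parameters of a single ReLU unit from output queries.
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)      # hidden ground-truth parameters
b_true = rng.normal()

def query(x):
    """Black-box membership query: only the scalar output is observable."""
    return max(0.0, float(w_true @ x + b_true))

# 1. Sample until we find a point well inside the active (affine) region.
x0 = rng.normal(size=d)
while query(x0) <= 1.0:
    x0 = rng.normal(size=d)

# 2. Inside the active region the map is affine, so a finite difference along
#    each basis direction recovers that weight coordinate (exactly, up to fp).
eps = 1e-3
w_hat = np.array([(query(x0 + eps * e) - query(x0)) / eps for e in np.eye(d)])
b_hat = query(x0) - w_hat @ x0

# 3. Compare the extracted unit with the black box on fresh queries.
x_test = rng.normal(size=(100, d))
err = max(abs(query(x) - max(0.0, float(w_hat @ x + b_hat))) for x in x_test)
print("max reconstruction error:", err)
```

The same piecewise-linear structure is what extraction of deeper networks exploits, but with multiple layers the activation pattern of every hidden unit must be disentangled, which is roughly where the difficulty of the three-layer case lies.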
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - The Clock and the Pizza: Two Stories in Mechanistic Explanation of
Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected, and that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z) - Long Short-term Cognitive Networks [2.2748974006378933]
We present a recurrent neural system named Long Short-term Cognitive Networks (LSTCNs) as a generalisation of the Short-term Cognitive Network (STCN) model.
Our neural system reports small forecasting errors while being up to thousands of times faster than state-of-the-art recurrent models.
arXiv Detail & Related papers (2021-06-30T17:42:09Z) - On the Post-hoc Explainability of Deep Echo State Networks for Time
Series Forecasting, Image and Video Classification [63.716247731036745]
Echo State Networks have attracted much attention over time, mainly due to the simplicity and computational efficiency of their learning algorithm.
This work addresses their limited explainability by conducting an explainability study of Echo State Networks when applied to learning tasks with time series, image and video data.
Specifically, the study proposes three different techniques capable of eliciting understandable information about the knowledge grasped by these recurrent models.
arXiv Detail & Related papers (2021-02-17T08:56:33Z) - Exploring Flip Flop memories and beyond: training recurrent neural
networks with key insights [0.0]
We study the implementation of a temporal processing task, specifically a 3-bit Flip Flop memory.
The obtained networks are meticulously analyzed to elucidate dynamics, aided by an array of visualization and analysis tools.
arXiv Detail & Related papers (2020-10-15T16:25:29Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture that explicitly targets multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
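As a speculative illustration of the last entry's idea of separating an RNN's hidden state into modules and growing it incrementally, the sketch below partitions the state into modules that update at different rates (a clockwork-RNN-style schedule) and exposes a method for adding a module while preserving existing parameters. The class name, the power-of-two update schedule, and all dimensions are assumptions made for illustration; this is not the cited paper's architecture or training algorithm.

```python
# Speculative sketch: a recurrent state split into modules with different update
# rates, plus incremental growth. Not the cited paper's actual model.
import numpy as np

class ModularRNN:
    def __init__(self, input_dim, module_dim, num_modules, seed=0):
        rng = np.random.default_rng(seed)
        self.module_dim, self.num_modules = module_dim, num_modules
        h_dim = module_dim * num_modules
        self.W_in = rng.normal(scale=0.1, size=(h_dim, input_dim))
        self.W_h = rng.normal(scale=0.1, size=(h_dim, h_dim))

    def add_module(self, seed=1):
        """Grow the hidden state by one module, keeping the old parameters."""
        rng = np.random.default_rng(seed)
        old = self.module_dim * self.num_modules
        self.num_modules += 1
        new = self.module_dim * self.num_modules
        W_in = rng.normal(scale=0.1, size=(new, self.W_in.shape[1]))
        W_h = rng.normal(scale=0.1, size=(new, new))
        W_in[:old], W_h[:old, :old] = self.W_in, self.W_h
        self.W_in, self.W_h = W_in, W_h

    def forward(self, xs):
        h = np.zeros(self.module_dim * self.num_modules)
        for t, x in enumerate(xs):
            pre = np.tanh(self.W_in @ x + self.W_h @ h)
            for k in range(self.num_modules):
                if t % (2 ** k) == 0:          # slower modules update less often
                    s = slice(k * self.module_dim, (k + 1) * self.module_dim)
                    h[s] = pre[s]
            yield h.copy()

rnn = ModularRNN(input_dim=3, module_dim=4, num_modules=2)
states = list(rnn.forward(np.random.default_rng(2).normal(size=(16, 3))))
rnn.add_module()    # e.g., before training on longer dependencies
```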
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.