SOCRATES: Towards a Unified Platform for Neural Network Analysis
- URL: http://arxiv.org/abs/2007.11206v2
- Date: Sat, 6 Feb 2021 03:37:56 GMT
- Title: SOCRATES: Towards a Unified Platform for Neural Network Analysis
- Authors: Long H. Pham, Jiaying Li and Jun Sun
- Abstract summary: We aim to build a unified framework for developing techniques to analyze neural networks.
We develop a platform called SOCRATES which supports a standardized format for a variety of neural network models.
Experimental results show that our platform can handle a wide range of network models and properties.
- Score: 7.318255652722096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studies show that neural networks, not unlike traditional programs, are
subject to bugs, e.g., adversarial samples that cause classification errors and
discriminatory instances that demonstrate the lack of fairness. Given that
neural networks are increasingly applied in critical applications (e.g.,
self-driving cars, face recognition systems and personal credit rating
systems), it is desirable that systematic methods are developed to analyze
(e.g., test or verify) neural networks against desirable properties. Recently,
a number of approaches have been developed for analyzing neural networks. These
efforts are however scattered (i.e., each approach tackles some restricted
classes of neural networks against certain particular properties), incomparable
(i.e., each approach has its own assumptions and input format) and thus hard to
apply, reuse or extend. In this project, we aim to build a unified framework
for developing techniques to analyze neural networks. Towards this goal, we
develop a platform called SOCRATES which supports a standardized format for a
variety of neural network models, an assertion language for property
specification as well as multiple neural network analysis algorithms, including
two novel ones for falsification and probabilistic verification of neural
network models. SOCRATES is extensible, so existing approaches can be easily
integrated. Experimental results show that our platform can handle a wide range
of network models and properties. More importantly, it provides a platform for
synergistic research on neural network analysis.
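The abstract names a novel falsification algorithm but does not describe it. As a hedged sketch only, checking a local-robustness assertion by random sampling might look like the following; every name here (`falsify_robustness`, `model`, the L-infinity radius `eps`) is a hypothetical illustration, not SOCRATES's actual assertion language or API.

```python
import numpy as np

def falsify_robustness(model, x0, eps=0.05, n_samples=1000, seed=0):
    """Search for an input in the L-infinity ball of radius eps around x0
    that changes the model's predicted class.

    Returns a counterexample if found, else None. This is a naive
    random-sampling illustration, not SOCRATES's actual algorithm.
    """
    rng = np.random.default_rng(seed)
    y0 = np.argmax(model(x0))                # reference prediction at x0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x0.shape)
        x = np.clip(x0 + delta, 0.0, 1.0)    # keep inputs in a valid range
        if np.argmax(model(x)) != y0:
            return x                          # robustness assertion falsified
    return None                               # no violation found (not a proof)

# Toy linear "model" for illustration: robust far from its decision
# boundary, fragile on the boundary itself.
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
model = lambda x: W @ x

print(falsify_robustness(model, np.array([0.9, 0.1])))  # None: no counterexample
print(falsify_robustness(model, np.array([0.5, 0.5])))  # almost surely finds one
```

Note that a sampling-based falsifier can only refute a property, never verify it; returning `None` means no counterexample was found within the budget.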
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Deception Detection from Linguistic and Physiological Data Streams Using Bimodal Convolutional Neural Networks [19.639533220155965]
This paper explores the application of convolutional neural networks for the purpose of multimodal deception detection.
We use a dataset built by interviewing 104 subjects about two topics, with one truthful and one falsified response from each subject about each topic.
arXiv Detail & Related papers (2023-11-18T02:44:33Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate that our approach captures known phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
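The summary does not give the construction of the surrogate. As a minimal sketch under assumed details, one can fit a standard Gaussian-process regression surrogate to input-output queries of a black-box network; the names and parameters below (`rbf_kernel`, `gp_posterior_mean`, the toy `network`) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(X_train, y_train, X_test, length=1.0, noise=1e-6):
    # Standard GP regression posterior mean with a small jitter term.
    K = rbf_kernel(X_train, X_train, length) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length)
    return K_s @ np.linalg.solve(K, y_train)

# Treat a "network" (here just a fixed nonlinear function standing in for
# a trained model) as a black box: query it at training inputs, then fit
# the GP surrogate to the observed outputs.
network = lambda X: np.tanh(2.0 * X[:, 0])
X_train = np.linspace(-2, 2, 20)[:, None]
y_train = network(X_train)
X_test = np.array([[0.5], [-1.0]])
pred = gp_posterior_mean(X_train, y_train, X_test, length=0.5)
```

The surrogate's predictive mean can then be inspected in place of the opaque network, which is the general idea behind surrogate-model analysis.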
arXiv Detail & Related papers (2022-08-11T20:17:02Z) - Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework of neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z) - Creating Powerful and Interpretable Models with Regression Networks [2.2049183478692584]
We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis.
We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets.
arXiv Detail & Related papers (2021-07-30T03:37:00Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the constraints.
A key ingredient of building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - Generate and Verify: Semantically Meaningful Formal Analysis of Neural Network Perception Systems [2.2559617939136505]
Testing remains the primary means of evaluating the accuracy of neural network perception systems.
We employ neural network verification to prove that a model will always produce estimates within some error bound of the ground truth.
arXiv Detail & Related papers (2020-12-16T23:09:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.