Rare-Event Simulation for Neural Network and Random Forest Predictors
- URL: http://arxiv.org/abs/2010.04890v1
- Date: Sat, 10 Oct 2020 03:27:09 GMT
- Title: Rare-Event Simulation for Neural Network and Random Forest Predictors
- Authors: Yuanlu Bai, Zhiyuan Huang, Henry Lam, Ding Zhao
- Abstract summary: We study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools.
This problem is motivated by fast-emerging studies on the safety evaluation of intelligent systems.
- Score: 16.701364984106092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study rare-event simulation for a class of problems where the target
hitting sets of interest are defined via modern machine learning tools such as
neural networks and random forests. This problem is motivated by fast-emerging
studies on the safety evaluation of intelligent systems, robustness
quantification of learning models, and other potential applications to
large-scale simulation in which machine learning tools can be used to
approximate complex rare-event set boundaries. We investigate an importance
sampling scheme that integrates the dominating point machinery in large
deviations and sequential mixed integer programming to locate the underlying
dominating points. Our approach works for a range of neural network
architectures including fully connected layers, rectified linear units,
normalization, pooling and convolutional layers, and random forests built from
standard decision trees. We provide efficiency guarantees and numerical
demonstration of our approach using a classification model in the UCI Machine
Learning Repository.
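
To make the scheme concrete, below is a minimal, illustrative sketch (not the paper's implementation). It estimates P(g(X) >= t) for a standard Gaussian input X and a small, randomly initialized ReLU network g by (i) locating an approximate dominating point x* = argmin ||x||^2 subject to g(x) >= t, and (ii) importance sampling with the input distribution shifted to N(x*, I). The paper locates dominating points exactly via sequential mixed integer programming; here a quadratic-penalty gradient search stands in for that step, and the network, threshold t, and sample size are arbitrary placeholders.

# Hedged sketch: dominating-point importance sampling for a toy ReLU network.
# The penalty-based search below is a stand-in for the paper's sequential MIP.
import torch

torch.manual_seed(0)

# Toy predictor: a fixed, randomly initialized ReLU network (stand-in for a trained model).
g = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
for p in g.parameters():
    p.requires_grad_(False)

t = 2.0  # placeholder rare-event threshold: the hitting set is {x : g(x) >= t}

# Step 1: approximate dominating point via a quadratic penalty on the constraint g(x) >= t.
x = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    slack = torch.relu(t - g(x).squeeze())       # constraint violation when g(x) < t
    loss = 0.5 * x.dot(x) + 1e3 * slack ** 2     # minimize ||x||^2 / 2 plus penalty
    loss.backward()
    opt.step()
x_star = x.detach()

# Step 2: importance sampling with the Gaussian input shifted to mean x_star.
n = 100_000
z = torch.randn(n, 4) + x_star                   # samples from N(x_star, I)
hit = (g(z).squeeze() >= t).float()              # indicator of the rare-event set
# Likelihood ratio N(0, I) / N(x_star, I) evaluated at the samples.
log_lr = -z @ x_star + 0.5 * x_star.dot(x_star)
estimate = (hit * torch.exp(log_lr)).mean().item()
print(f"dominating point norm: {x_star.norm().item():.3f}")
print(f"IS estimate of P(g(X) >= t): {estimate:.3e}")

The likelihood ratio exp(-x*^T z + ||x*||^2 / 2) re-weights each shifted sample, so the estimator remains unbiased regardless of how accurately the dominating point is located; the quality of x* only affects the variance.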
Related papers
- Explainable AI for Comparative Analysis of Intrusion Detection Models [20.683181384051395]
This research applies various machine learning models to the tasks of binary and multi-class classification for intrusion detection from network traffic.
We trained all models to an accuracy of 90% on the UNSW-NB15 dataset.
We also discover that Random Forest provides the best performance in terms of accuracy, time efficiency and robustness.
arXiv Detail & Related papers (2024-06-14T03:11:01Z) - Distributionally Robust Statistical Verification with Imprecise Neural
Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z) - Machine Learning to detect cyber-attacks and discriminating the types of
power system disturbances [0.0]
This research proposes a machine learning-based attack detection model for power systems, specifically targeting smart grids.
By utilizing data and logs collected from Phasor Measurement Units (PMUs), the model aims to learn system behaviors and effectively identify potential security boundaries.
arXiv Detail & Related papers (2023-07-06T22:32:06Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - The emergence of a concept in shallow neural networks [0.0]
We consider restricted Boltzmann machines (RBMs) trained on an unstructured dataset made of blurred copies of definite but unavailable "archetypes".
We show that there exists a critical sample size beyond which the RBM can learn archetypes.
arXiv Detail & Related papers (2021-09-01T15:56:38Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Neural Complexity Measures [96.06344259626127]
We propose Neural Complexity (NC), a meta-learning framework for predicting generalization.
Our model learns a scalar complexity measure through interactions with many heterogeneous tasks in a data-driven way.
arXiv Detail & Related papers (2020-08-07T02:12:10Z) - Zero-Shot Reinforcement Learning with Deep Attention Convolutional
Neural Networks [12.282277258055542]
We show that a deep attention convolutional neural network (DACNN) with a specific visual sensor configuration performs as well as a network trained on a dataset with high domain and parameter variation, at lower computational complexity.
Our new architecture adapts perception with respect to the control objective, resulting in zero-shot learning without pre-training a perception network.
arXiv Detail & Related papers (2020-01-02T19:41:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.