How to Tell Deep Neural Networks What We Know
- URL: http://arxiv.org/abs/2107.10295v1
- Date: Wed, 21 Jul 2021 18:18:02 GMT
- Title: How to Tell Deep Neural Networks What We Know
- Authors: Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan
- Abstract summary: This paper examines the inclusion of domain-knowledge by means of changes to: the input, the loss-function, and the architecture of deep networks.
In each category, we describe techniques that have been shown to yield significant changes in network performance.
- Score: 2.2186394337073527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a short survey of ways in which existing scientific knowledge is
included when constructing models with neural networks. The inclusion of
domain-knowledge is of special interest not just to constructing scientific
assistants, but also to many other areas that involve understanding data using
human-machine collaboration. In many such instances, machine-based model
construction may benefit significantly from being provided with human-knowledge
of the domain encoded in a sufficiently precise form. This paper examines the
inclusion of domain-knowledge by means of changes to: the input, the
loss-function, and the architecture of deep networks. The categorisation is for
ease of exposition: in practice we expect a combination of such changes will be
employed. In each category, we describe techniques that have been shown to
yield significant changes in network performance.
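Of the three routes the abstract names, the loss-function route admits the simplest illustration: add a penalty term that grows when the model's outputs violate a known domain constraint. The sketch below is a hedged, plain-Python example and not a method from the paper; the non-negativity constraint and the function name `constrained_loss` are hypothetical choices for illustration.

```python
def constrained_loss(predictions, targets, lam=1.0):
    """Task loss (mean squared error) plus a weighted penalty for
    violating a domain constraint.

    Domain knowledge assumed here (hypothetical): valid outputs are
    non-negative, so negative predictions incur a squared hinge penalty.
    """
    # Standard data-fit term.
    task = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    # Constraint-violation term: zero whenever the constraint holds.
    penalty = sum(max(0.0, -p) ** 2 for p in predictions) / len(predictions)
    return task + lam * penalty
```

The weight `lam` trades off fitting the data against respecting the prior knowledge; in a gradient-based framework the same idea is implemented with differentiable tensor operations so the penalty shapes training directly.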
Related papers
- Pruning neural network models for gene regulatory dynamics using data and domain knowledge [24.670514977455202]
We propose DASH, a framework that guides network pruning by using domain-specific structural information in model fitting.
We show that DASH, using knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin.
arXiv Detail & Related papers (2024-03-05T23:02:55Z)
- Visualization Of Class Activation Maps To Explain AI Classification Of Network Packet Captures [0.0]
The number of connections and the addition of new applications in our networks generate vast amounts of log data.
Deep learning methods provide both feature extraction and classification from data in a single system.
We present a visual interactive tool that combines the classification of network data with an explanation technique to form an interface between experts, algorithms, and data.
arXiv Detail & Related papers (2022-09-05T16:34:43Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Incorporation of Deep Neural Network & Reinforcement Learning with Domain Knowledge [0.0]
We present a study of the ways in which domain information has been incorporated when building models with neural networks.
Integrating domain information is uniquely important to the development of knowledge-understanding models, as well as to other fields that aid in understanding information by utilizing the human-machine interface and Reinforcement Learning.
arXiv Detail & Related papers (2021-07-29T17:29:02Z)
- A Comprehensive Survey on Community Detection with Deep Learning [93.40332347374712]
A community reveals the features and connections of its members that are different from those in other communities in a network.
This survey devises and proposes a new taxonomy covering different categories of the state-of-the-art methods.
The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.
arXiv Detail & Related papers (2021-05-26T14:37:07Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, together with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Incorporating Domain Knowledge into Deep Neural Networks [2.2186394337073527]
The inclusion of domain-knowledge is of special interest not just to constructing scientific assistants, but also to many other areas that involve understanding data using human-machine collaboration.
This paper examines two broad approaches to encode such knowledge--as logical and numerical constraints--and describes techniques and results obtained in several sub-categories under each of these approaches.
arXiv Detail & Related papers (2021-02-27T10:39:43Z)
- A Survey of Community Detection Approaches: From Statistical Modeling to Deep Learning [95.27249880156256]
We develop and present a unified architecture of network community-finding methods.
We introduce a new taxonomy that divides the existing methods into two categories, namely probabilistic graphical model and deep learning.
We conclude with discussions of the challenges of the field and suggestions of possible directions for future research.
arXiv Detail & Related papers (2021-01-03T02:32:45Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View [0.0]
We study how homeomorphism affects the learned representation of a malware traffic dataset.
Our results suggest that although the details of learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same.
arXiv Detail & Related papers (2020-09-16T15:37:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences of its use.