Understanding Neural Code Intelligence Through Program Simplification
- URL: http://arxiv.org/abs/2106.03353v1
- Date: Mon, 7 Jun 2021 05:44:29 GMT
- Title: Understanding Neural Code Intelligence Through Program Simplification
- Authors: Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, Mohammad Amin Alipour
- Abstract summary: We propose a model-agnostic approach to identify critical input features for models in code intelligence systems.
Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model.
We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior.
- Score: 3.9704927572880253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide range of code intelligence (CI) tools, powered by deep neural
networks, have been developed recently to improve programming productivity and
perform program analysis. To reliably use such tools, developers often need to
reason about the behavior of the underlying models and the factors that affect
them. This is especially challenging for tools backed by deep neural networks.
Various methods have tried to reduce this opacity in the vein of
"transparent/interpretable-AI". However, these approaches are often specific to
a particular set of network architectures, even requiring access to the
network's parameters. This makes them difficult to use for the average
programmer, which hinders the reliable adoption of neural CI systems. In this
paper, we propose a simple, model-agnostic approach to identify critical input
features for models in CI systems, by drawing on software debugging research,
specifically delta debugging. Our approach, SIVAND, uses simplification
techniques that reduce the size of input programs of a CI model while
preserving the predictions of the model. We show that this approach yields
remarkably small outputs and is broadly applicable across many model
architectures and problem domains. We find that the models in our experiments
often rely heavily on just a few syntactic features in input programs. We
believe that SIVAND's extracted features may help understand neural CI systems'
predictions and learned behavior.
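The reduction loop at the heart of this approach can be illustrated with a ddmin-style delta-debugging sketch. This is not SIVAND's actual implementation; `test` is a hypothetical predicate standing in for "the CI model still makes the same prediction on this input", and the token list is a stand-in for a tokenized input program.

```python
def ddmin(items, test):
    """Shrink `items` while `test(items)` (same model prediction) stays True.

    A simplified, complement-only variant of Zeller's ddmin algorithm.
    """
    assert test(items), "property must hold on the full input"
    n = 2  # number of chunks to partition the input into
    while len(items) >= 2:
        subset_len = len(items) // n
        reduced = False
        start = 0
        while start < len(items):
            # Try deleting one chunk and keeping the complement.
            complement = items[:start] + items[start + subset_len:]
            if test(complement):
                items = complement     # smaller input, prediction preserved
                n = max(n - 1, 2)      # coarsen the partition again
                reduced = True
                break
            start += subset_len
        if not reduced:
            if n >= len(items):
                break                  # no single chunk can be removed
            n = min(n * 2, len(items)) # refine the partition and retry
    return items


# Toy stand-in model: the "prediction" depends only on the token "x",
# so the reduction should strip everything else away.
tokens = list("abxcdefg")
minimal = ddmin(tokens, lambda t: "x" in t)
```

In SIVAND the predicate would re-run the neural model on each candidate program and compare its prediction to the original; the surviving tokens are the critical input features the paper analyzes.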
Related papers
- DLBacktrace: A Model Agnostic Explainability for any Deep Learning Models [1.747623282473278]
Deep learning models operate as opaque 'black boxes' with limited transparency in their decision-making processes.
This study addresses the pressing need for interpretability in AI systems, emphasizing its role in fostering trust, ensuring accountability, and promoting responsible deployment in mission-critical fields.
We introduce DLBacktrace, an innovative technique developed by the AryaXAI team to illuminate model decisions across a wide array of domains.
arXiv Detail & Related papers (2024-11-19T16:54:30Z)
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Interpretability of an Interaction Network for identifying $H \rightarrow b\bar{b}$ jets [4.553120911976256]
AI models based on deep neural networks have become increasingly popular for many such applications.
We explore the interpretability of AI models by examining an Interaction Network (IN) model designed to identify boosted $H \rightarrow b\bar{b}$ jets.
We additionally illustrate the activity of hidden layers within the IN model as Neural Activation Pattern (NAP) diagrams.
arXiv Detail & Related papers (2022-11-23T08:38:52Z)
- NeuralUQ: A comprehensive library for uncertainty quantification in neural differential equations and operators [0.0]
Uncertainty quantification (UQ) in machine learning is currently drawing increasing research interest.
We present an open-source Python library, termed NeuralUQ, for employing UQ methods for SciML in a convenient and structured manner.
arXiv Detail & Related papers (2022-08-25T04:28:18Z)
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- A comparative study of neural network techniques for automatic software vulnerability detection [9.443081849443184]
The most commonly used method for detecting software vulnerabilities is static analysis.
Some researchers have proposed to use neural networks that have the ability of automatic feature extraction to improve intelligence of detection.
We have conducted extensive experiments to test the performance of the two most typical neural networks.
arXiv Detail & Related papers (2021-04-29T01:47:30Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.