Outside the Sandbox: A Study of Input/Output Methods in Java
- URL: http://arxiv.org/abs/2306.11882v1
- Date: Tue, 20 Jun 2023 20:54:02 GMT
- Title: Outside the Sandbox: A Study of Input/Output Methods in Java
- Authors: Matúš Sulír, Sergej Chodarev, Milan Nosáľ
- Abstract summary: We manually categorized 1435 native methods in a Java Standard Edition distribution into non-I/O and I/O-related methods.
Results showed that 21% of the executed methods directly or indirectly called an I/O native.
We conclude that neglecting I/O is not a viable option for tool designers and suggest the integration of I/O-related metadata with source code.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Programming languages often demarcate the internal sandbox, consisting of
entities such as objects and variables, from the outside world, e.g., files or
network. Although communication with the external world poses fundamental
challenges for live programming, reversible debugging, testing, and program
analysis in general, studies about this phenomenon are rare. In this paper, we
present a preliminary empirical study about the prevalence of input/output
(I/O) method usage in Java. We manually categorized 1435 native methods in a
Java Standard Edition distribution into non-I/O and I/O-related methods, which
were further classified into areas such as desktop or file-related ones.
According to the static analysis of a call graph for 798 projects, about 57% of
methods potentially call I/O natives. The results of dynamic analysis on 16
benchmarks showed that 21% of the executed methods directly or indirectly
called an I/O native. We conclude that neglecting I/O is not a viable option
for tool designers and suggest the integration of I/O-related metadata with
source code to facilitate their querying.
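The study's premise, that ordinary Java I/O ultimately bottoms out in a small set of native methods, can be observed directly via reflection. The following sketch is an illustration of that idea, not the authors' tooling: it lists the methods declared on `java.io.FileInputStream` that carry the `native` modifier. The exact method names (e.g. `open0`, `read0`) are JDK-version-dependent, but at least one native method is present in standard distributions.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class NativeIoProbe {
    public static void main(String[] args) {
        int nativeCount = 0;
        // Inspect every method declared directly on FileInputStream
        // and report the ones implemented in native code.
        for (Method m : java.io.FileInputStream.class.getDeclaredMethods()) {
            if (Modifier.isNative(m.getModifiers())) {
                nativeCount++;
                System.out.println("native: " + m.getName());
            }
        }
        System.out.println("native methods found: " + nativeCount);
    }
}
```

Categorizing such methods by hand, as the paper does for 1435 of them, amounts to deciding for each whether it crosses the sandbox boundary (files, network, desktop) or stays inside the runtime.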
Related papers
- Embracing Objects Over Statics: An Analysis of Method Preferences in Open Source Java Frameworks [0.0]
This study scrutinizes the runtime behavior of 28 open-source Java frameworks using the YourKit profiler.
Contrary to expectations, our findings reveal a predominant use of instance methods and constructors over static methods.
arXiv Detail & Related papers (2024-10-08T02:30:20Z)
- Icing on the Cake: Automatic Code Summarization at Ericsson [4.145468342589189]
We evaluate the performance of an approach called Automatic Semantic Augmentation of Prompts (ASAP)
We compare the performance of four simpler approaches that do not require static program analysis, information retrieval, or the presence of exemplars.
arXiv Detail & Related papers (2024-08-19T06:49:04Z)
- Theoretically Achieving Continuous Representation of Oriented Bounding Boxes [64.15627958879053]
This paper endeavors to completely solve the issue of discontinuity in Oriented Bounding Box representation.
We propose a novel representation method called Continuous OBB (COBB) which can be readily integrated into existing detectors.
For fairness and transparency of experiments, we have developed a modularized benchmark based on the open-source deep learning framework Jittor's detection toolbox JDet for OOD evaluation.
arXiv Detail & Related papers (2024-02-29T09:27:40Z)
- Symbol-Specific Sparsification of Interprocedural Distributive Environment Problems [3.9777369380822956]
This paper presents Sparse IDE, a framework that realizes sparsification for any static analysis that fits the Interprocedural Distributive Environment (IDE) framework.
We design, implement and evaluate a linear constant propagation analysis client on top of SparseHeros.
arXiv Detail & Related papers (2024-01-26T12:31:30Z)
- Open World Object Detection in the Era of Foundation Models [53.683963161370585]
We introduce a new benchmark that includes five real-world application-driven datasets.
We introduce a novel method, Foundation Object detection Model for the Open world, or FOMO, which identifies unknown objects based on their shared attributes with the base known objects.
arXiv Detail & Related papers (2023-12-10T03:56:06Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses under massive real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- A Language Model of Java Methods with Train/Test Deduplication [5.529795221640365]
This tool demonstration presents a research toolkit for a language model of Java source code.
The target audience includes researchers studying problems at the granularity level of subroutines, statements, or variables in Java.
arXiv Detail & Related papers (2023-05-15T00:22:02Z)
- SAT-Based Extraction of Behavioural Models for Java Libraries with Collections [0.087024326813104]
Behavioural models are a valuable tool for software verification, testing, monitoring, publishing, etc.
They are rarely provided by software developers and have to be extracted either from the source code or from the compiled code.
Most of these approaches rely on the analysis of the compiled bytecode.
We are looking to extract behavioural models in the form of Finite State Machines (FSMs) from the Java source code to ensure that the obtained FSMs can be easily understood by the software developers.
arXiv Detail & Related papers (2022-05-30T17:27:13Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
We propose methods to facilitate identification of training data artifacts.
We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
arXiv Detail & Related papers (2021-07-01T09:26:13Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.