Corpus Christi: Establishing Replicability when Sharing the Bread is Not Allowed
- URL: http://arxiv.org/abs/2404.11977v1
- Date: Thu, 18 Apr 2024 08:14:40 GMT
- Title: Corpus Christi: Establishing Replicability when Sharing the Bread is Not Allowed
- Authors: René Helmke, Elmar Padilla, Nils Aschenbruck
- Abstract summary: We identify binary analysis challenges that significantly impact corpus creation.
We use them to derive a framework of key corpus requirements that nurture the scientific goals of replicability and representativeness.
We apply the framework to 44 top-tier papers and collect 704 data points to show that there is currently no common ground on corpus creation.
- Score: 1.1101390076342181
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we provide practical tools to improve the scientific soundness of firmware corpora beyond the state of the art. We identify binary analysis challenges that significantly impact corpus creation. We use them to derive a framework of key corpus requirements that nurture the scientific goals of replicability and representativeness. We apply the framework to 44 top-tier papers and collect 704 data points to show that there is currently no common ground on corpus creation. We discover, in otherwise excellent work, that incomplete documentation and inflated corpus sizes blur the vision of representativeness and hinder replicability. Our results show that the strict framework provides useful and practical guidelines that can identify minuscule stepping stones in corpus creation with significant impact on soundness. Finally, we show that it is possible to meet all requirements: we provide a new corpus called LFwC. It is designed for large-scale static analyses of Linux-based firmware and consists of 10,913 high-quality images, covering 2,365 network appliances. We share rich metadata and scripts for replicability with the community. We verify unpacking, perform deduplication, identify contents, and provide bug ground truth. We identify ISAs and Linux kernels. All samples can be unpacked with the open-source tool FACT.
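The deduplication step mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' actual pipeline (the LFwC scripts and the FACT unpacker are the real artifacts); it only shows the common idea of grouping firmware images by content hash so that byte-identical samples are counted once:

```python
import hashlib
from pathlib import Path


def dedupe_images(image_dir: str) -> dict[str, list[Path]]:
    """Group firmware images by SHA-256 digest.

    Images that share a digest are byte-identical duplicates;
    each group's first entry can serve as the canonical sample.
    """
    groups: dict[str, list[Path]] = {}
    for path in sorted(Path(image_dir).glob("*.bin")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups.setdefault(digest, []).append(path)
    return groups
```

The `*.bin` glob and directory layout are hypothetical; content hashing is only a first pass, since repacked or trivially modified images require fuzzier comparison.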
Related papers
- CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks [111.13988772503511]
Knowledge-intensive language tasks (KILTs) typically require retrieving relevant documents from trustworthy corpora, e.g., Wikipedia, to produce specific answers.
Very recently, a pre-trained generative retrieval model for KILTs, named CorpusBrain, was proposed and reached new state-of-the-art retrieval performance.
arXiv Detail & Related papers (2024-02-26T17:35:44Z)
- GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation [14.511401955827875]
Object detection in documents is a key step to automate the structural elements identification process.
We present a graph-based knowledge distillation framework to correctly identify and localize the document objects in a document image.
arXiv Detail & Related papers (2024-02-17T23:08:32Z)
- What's In My Big Data? [67.04525616289949]
We propose What's In My Big Data? (WIMBD), a platform and a set of sixteen analyses that allow us to reveal and compare the contents of large text corpora.
WIMBD builds on two basic capabilities -- count and search -- at scale, which allows us to analyze more than 35 terabytes on a standard compute node.
Our analysis uncovers several surprising and previously undocumented findings about these corpora, including the high prevalence of duplicate, synthetic, and low-quality content.
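The "count" capability described above can be sketched in miniature. This is a toy illustration of counting n-gram frequencies across a corpus, not WIMBD's actual implementation, which is engineered to scale to tens of terabytes:

```python
from collections import Counter


def top_ngrams(docs: list[str], n: int = 2, k: int = 5) -> list[tuple[tuple[str, ...], int]]:
    """Return the k most frequent word n-grams across a corpus.

    A toy version of a corpus-level 'count' analysis: tallies
    every n-gram in every document, then keeps the top k.
    """
    counts: Counter = Counter()
    for doc in docs:
        words = doc.lower().split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return counts.most_common(k)
```

At real scale this kind of count is sharded across files and merged, but the aggregation logic is the same.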
arXiv Detail & Related papers (2023-10-31T17:59:38Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate across all cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- Precise Zero-Shot Dense Retrieval without Relevance Labels [60.457378374671656]
Hypothetical Document Embeddings(HyDE) is a zero-shot dense retrieval system.
We show that HyDE significantly outperforms the state-of-the-art unsupervised dense retriever Contriever.
arXiv Detail & Related papers (2022-12-20T18:09:52Z)
- Noise-Robust De-Duplication at Scale [4.499833362998488]
This study uses the unique timeliness of historical news wires to create a 27,210 document dataset.
We develop and evaluate a range of de-duplication methods, including hashing and N-gram overlap.
We show that the bi-encoder scales well, de-duplicating a 10 million article corpus on a single GPU card in a matter of hours.
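The N-gram overlap baseline mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the choice of `n = 3` and the 0.8 Jaccard threshold are hypothetical parameters:

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Word-level n-grams of a document as a set."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def is_duplicate(a: str, b: str, n: int = 3, threshold: float = 0.8) -> bool:
    """Flag two documents as near-duplicates when the Jaccard
    similarity of their n-gram sets meets the threshold."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return False
    return len(ga & gb) / len(ga | gb) >= threshold
```

Pairwise comparison is quadratic in corpus size, which is why hashing and bi-encoder approaches are needed for millions of articles.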
arXiv Detail & Related papers (2022-10-09T13:30:42Z)
- One-shot Key Information Extraction from Document with Deep Partial Graph Matching [60.48651298832829]
Key Information Extraction (KIE) from documents improves efficiency, productivity, and security in many industrial scenarios.
Existing supervised learning methods for the KIE task require a large number of labeled samples and learn separate models for different types of documents.
We propose a deep end-to-end trainable network for one-shot KIE using partial graph matching.
arXiv Detail & Related papers (2021-09-26T07:45:53Z)
- CARAFE++: Unified Content-Aware ReAssembly of FEatures [132.49582482421246]
We propose unified Content-Aware ReAssembly of FEatures (CARAFE++), a universal, lightweight, and highly effective operator for feature reassembly.
CARAFE++ generates adaptive kernels on-the-fly to enable instance-specific content-aware handling.
It shows consistent and substantial gains across all the tasks with negligible computational overhead.
arXiv Detail & Related papers (2020-12-07T07:34:57Z)
- Learning from similarity and information extraction from structured documents [0.0]
The aim is to improve micro F1 of per-word classification on a huge real-world document dataset.
Results confirm that all of the proposed architecture components are required to beat the previous results.
The best model improves the previous state-of-the-art results by 8.25 F1 points.
arXiv Detail & Related papers (2020-10-17T21:34:52Z)
- Know thy corpus! Robust methods for digital curation of Web corpora [0.0]
This paper proposes a novel framework for digital curation of Web corpora.
It provides robust estimation of their parameters, such as their composition and the lexicon.
arXiv Detail & Related papers (2020-03-13T17:21:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.