Harvesting Textual and Structured Data from the HAL Publication Repository
- URL: http://arxiv.org/abs/2407.20595v2
- Date: Thu, 27 Feb 2025 19:33:23 GMT
- Title: Harvesting Textual and Structured Data from the HAL Publication Repository
- Authors: Francis Kulumba, Wissam Antoun, Guillaume Vimont, Laurent Romary
- Abstract summary: HAL (Hyper Articles en Ligne) is the French national publication repository. We present HALvest, a unique dataset that bridges the gap between citation networks and the full text of HAL-submitted articles.
- Score: 2.2811655242978444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: HAL (\textit{Hyper Articles en Ligne}) is the French national publication repository, used by most higher education and research organizations for their open science policy. Although it is a rich repository of academic documents, its potential for advanced research has not been fully explored. We present HALvest, a unique dataset that bridges the gap between citation networks and the full text of HAL-submitted articles to help with authorship attribution and verification. This first iteration consists of approximately 700,000 documents, spanning 56 languages across 13 identified domains. We transform articles' metadata into a citation network, producing a heterogeneous graph. This graph includes uniquely identified authors on HAL, as well as all open-access documents and their references. Finally, we mine 14.5 million high-quality sequence pairs from HALvest for contrastive learning purposes. By providing different views of HAL, suited for modern machine learning, we aim to assist practitioners in better analyzing and interpreting research dynamics.
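For readers who want to try a small-scale harvest themselves, the sketch below queries HAL's public, Solr-style search API at api.archives-ouvertes.fr for open-access records. This is a minimal sketch based on general knowledge of the API, not code from the paper; the filter openAccess_bool and the field names (halId_s, title_s, abstract_s, language_s) are assumptions about HAL's schema.

```python
import requests

# Minimal sketch of harvesting record metadata from HAL's public search API.
# NOTE: the filter and field names below are assumptions about HAL's Solr
# schema, not details taken from the HALvest paper.
HAL_API = "https://api.archives-ouvertes.fr/search/"

params = {
    "q": "*:*",                                      # match every record
    "fq": "openAccess_bool:true",                    # assumed open-access filter
    "fl": "halId_s,title_s,abstract_s,language_s",   # fields to return
    "rows": 50,                                      # page size
    "wt": "json",                                    # JSON response format
}

resp = requests.get(HAL_API, params=params, timeout=30)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    print(doc.get("halId_s"), "-", doc.get("title_s"))
```

Paging through the full repository and resolving each record's references into a heterogeneous citation graph would build on the same endpoint.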
Related papers
- MOLE: Metadata Extraction and Validation in Scientific Papers Using LLMs [54.5729817345543]
MOLE is a framework that automatically extracts metadata attributes from scientific papers covering datasets of languages other than Arabic. Our methodology processes entire documents across multiple input formats and incorporates robust validation mechanisms for consistent output.
arXiv Detail & Related papers (2025-05-26T10:31:26Z)
- SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature [80.49349719239584]
We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following demonstrations for 54 tasks.
SciRIFF is the first dataset focused on extracting and synthesizing information from research literature across a wide range of scientific fields.
arXiv Detail & Related papers (2024-06-10T21:22:08Z)
- DocReLM: Mastering Document Retrieval with Language Model [49.847369507694154]
We demonstrate that by utilizing large language models, a document retrieval system can achieve advanced semantic understanding capabilities.
Our approach involves training the retriever and reranker using domain-specific data generated by large language models.
We use a test set annotated by academic researchers in the fields of quantum physics and computer vision to evaluate our system's performance.
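As an illustration of the retrieve-then-rerank setup described above, here is a hedged sketch using sentence-transformers: a bi-encoder retrieves candidates and a cross-encoder rescores them. The public checkpoints named below are generic placeholders, not the domain-specific models trained in DocReLM.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Generic public checkpoints used as placeholders for illustration only.
retriever = SentenceTransformer("all-MiniLM-L6-v2")              # bi-encoder
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # cross-encoder

corpus = [
    "We study error correction codes for superconducting qubits.",
    "A contrastive pretraining method for vision transformers.",
]
query = "quantum error correction for superconducting hardware"

# Stage 1: dense retrieval with the bi-encoder.
corpus_emb = retriever.encode(corpus, convert_to_tensor=True)
query_emb = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]

# Stage 2: rescore the retrieved candidates with the cross-encoder.
pairs = [(query, corpus[h["corpus_id"]]) for h in hits]
scores = reranker.predict(pairs)
for (q, doc), s in sorted(zip(pairs, scores), key=lambda x: -float(x[1])):
    print(round(float(s), 3), doc)
```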
arXiv Detail & Related papers (2024-05-19T06:30:22Z)
- KG-CTG: Citation Generation through Knowledge Graph-guided Large Language Models [35.80247519023821]
Citation Text Generation (CTG) is a task in natural language processing (NLP) that aims to produce text that accurately cites or references a cited document within a source document.
This paper presents a framework, and a comparative study to demonstrate the use of Large Language Models (LLMs) for the task of citation generation.
arXiv Detail & Related papers (2024-04-15T13:06:32Z)
- Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora [104.16648246740543]
We propose an efficient data collection method based on large language models.
The method bootstraps seed information through a large language model and retrieves related data from public corpora.
It not only collects knowledge-related data for specific domains but unearths the data with potential reasoning procedures.
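The sketch below illustrates only the retrieval half of such a pipeline, using a plain BM25 index from the rank_bm25 package; in the described method the queries would be bootstrapped from seed information by a large language model, whereas here they are hard-coded stand-ins.

```python
from rank_bm25 import BM25Okapi

# Hard-coded stand-ins for queries that an LLM would normally bootstrap
# from a handful of seed topics.
seed_queries = ["automated theorem proving data", "step by step math solutions"]

# A toy "public corpus"; in practice this would be web-scale text.
corpus = [
    "A dataset of formal proofs for automated theorem proving.",
    "Web pages about cooking recipes and kitchen equipment.",
    "Step by step worked solutions to grade school math problems.",
]
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)

for query in seed_queries:
    scores = bm25.get_scores(query.lower().split())
    best = max(range(len(corpus)), key=lambda i: scores[i])
    print(f"{query!r} -> {corpus[best]}")
```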
arXiv Detail & Related papers (2024-01-26T03:38:23Z)
- Large Language Models for Generative Information Extraction: A Survey [89.71273968283616]
Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation.
We present an extensive overview by categorizing these works in terms of various IE subtasks and techniques.
We empirically analyze the most advanced methods and discover the emerging trend of IE tasks with LLMs.
arXiv Detail & Related papers (2023-12-29T14:25:22Z)
- GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training Data Exploration [97.68234051078997]
We discuss how Pyserini can be integrated with the Hugging Face ecosystem of open-source AI libraries and artifacts.
We include a Jupyter Notebook-based walk through the core interoperability features, available on GitHub.
We present GAIA Search - a search engine built following previously laid out principles, giving access to four popular large-scale text collections.
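A minimal Pyserini keyword query looks roughly like the sketch below; the prebuilt index name robust04 is a generic example shipped with Pyserini, not one of the GAIA Search collections.

```python
from pyserini.search.lucene import LuceneSearcher

# "robust04" is a generic prebuilt index used here for illustration;
# the GAIA Search corpora are not assumed to be available under this name.
searcher = LuceneSearcher.from_prebuilt_index("robust04")
hits = searcher.search("open science publication repositories", k=5)

for hit in hits:
    # Each hit exposes the document id and its BM25 score.
    print(f"{hit.docid}\t{hit.score:.3f}")
```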
arXiv Detail & Related papers (2023-06-02T12:09:59Z)
- CoCon: A Data Set on Combined Contextualized Research Artifact Use [0.0]
CoCon is a large scholarly data set reflecting the combined use of research artifacts in academic publications' full-text.
Our data set comprises 35k artifacts (data sets, methods, models, and tasks) and 340k publications.
We formalize a link prediction task for "combined research artifact use prediction" and provide code to support both analyses of our data and the development of ML applications on it.
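A toy version of that link-prediction setup might look like the sketch below, which ranks unobserved artifact pairs with a Jaccard-coefficient baseline over a co-use graph; the artifacts and edges are invented for illustration and are not taken from CoCon.

```python
import networkx as nx

# Invented co-use graph: nodes are research artifacts, an edge means the two
# artifacts were used together in at least one publication.
G = nx.Graph()
G.add_edges_from([
    ("BERT", "SQuAD"), ("BERT", "GLUE"), ("SQuAD", "GLUE"),
    ("ResNet", "ImageNet"), ("ResNet", "GLUE"),
])

# Simple link-prediction baseline: rank unobserved artifact pairs by the
# Jaccard coefficient of their neighbourhoods.
candidates = [("BERT", "ResNet"), ("SQuAD", "ImageNet")]
for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(u, v, round(score, 3))
```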
arXiv Detail & Related papers (2023-03-27T13:29:09Z)
- PubGraph: A Large-Scale Scientific Knowledge Graph [11.240833731512609]
PubGraph is a new resource for studying scientific progress that takes the form of a large-scale knowledge graph.
PubGraph is comprehensive and unifies data from various sources, including Wikidata, OpenAlex, and Semantic Scholar.
We create several large-scale benchmarks extracted from PubGraph for the core task of knowledge graph completion.
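As a hedged illustration of the knowledge graph completion task, the snippet below scores triples with a TransE-style distance in PyTorch; PubGraph's benchmarks do not prescribe this particular model.

```python
import torch

# Tiny TransE-style scorer; entity/relation counts and dimensions are arbitrary.
n_entities, n_relations, dim = 1000, 20, 64
ent = torch.nn.Embedding(n_entities, dim)
rel = torch.nn.Embedding(n_relations, dim)

def score(head, relation, tail):
    # TransE assumption: for a true triple, head + relation is close to tail,
    # so a smaller L2 distance means a more plausible triple.
    return -torch.norm(ent(head) + rel(relation) - ent(tail), dim=-1)

h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
print(score(h, r, t))  # untrained embeddings, so this score is meaningless here
```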
arXiv Detail & Related papers (2023-02-04T20:03:55Z)
- The Semantic Scholar Open Data Platform [79.4493235243312]
Semantic Scholar (S2) is an open data platform and website aimed at accelerating science by helping scholars discover and understand scientific literature.
We combine public and proprietary data sources using state-of-the-art techniques for scholarly PDF content extraction and automatic knowledge graph construction.
The graph includes advanced semantic features such as structurally parsed text, natural language summaries, and vector embeddings.
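For orientation, the public Semantic Scholar Graph API can be queried as in the sketch below; the endpoint and field names follow its documented v1 interface at the time of writing and may change.

```python
import requests

# Query the Semantic Scholar Graph API for papers matching a keyword search.
url = "https://api.semanticscholar.org/graph/v1/paper/search"
params = {
    "query": "open access citation networks",
    "fields": "title,year,citationCount",
    "limit": 5,
}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("year"), paper.get("citationCount"), paper.get("title"))
```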
arXiv Detail & Related papers (2023-01-24T17:13:08Z)
- Taxonomy Enrichment with Text and Graph Vector Representations [61.814256012166794]
We address the problem of taxonomy enrichment which aims at adding new words to the existing taxonomy.
We present a new method that achieves strong results on this task with little effort.
We achieve state-of-the-art results across different datasets and provide an in-depth error analysis of mistakes.
arXiv Detail & Related papers (2022-01-21T09:01:12Z)
- Pattern-based Acquisition of Scientific Entities from Scholarly Article Titles [0.0]
We describe a rule-based approach for the automatic acquisition of scientific entities from scholarly article titles.
We identify a set of lexico-syntactic patterns that are easily recognizable.
A subset of the acquisition algorithm is implemented for article titles in the Computational Linguistics (CL) scholarly domain.
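The sketch below shows what lexico-syntactic patterns over article titles can look like in practice; the patterns and entity labels are illustrative stand-ins, not the rule set published in the paper.

```python
import re

# Illustrative title patterns of the form "<X> for <Y>" / "<X> using <Y>";
# these are not the published rules, only stand-ins for the general idea.
PATTERNS = [
    (r"(.+?)\s+for\s+(.+)", ("method", "task")),
    (r"(.+?)\s+using\s+(.+)", ("task", "method")),
    (r"(.+?):\s+a\s+(?:new\s+)?(?:dataset|corpus)\s+(?:for|of)\s+(.+)",
     ("resource", "task")),
]

def extract(title: str) -> dict:
    """Return the first pattern match as a dict mapping entity label -> span."""
    for pattern, labels in PATTERNS:
        match = re.match(pattern, title, flags=re.IGNORECASE)
        if match:
            return dict(zip(labels, (g.strip() for g in match.groups())))
    return {}

print(extract("Neural Topic Models for Scientific Document Clustering"))
# -> {'method': 'Neural Topic Models', 'task': 'Scientific Document Clustering'}
```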
arXiv Detail & Related papers (2021-09-01T05:59:06Z)
- Enhancing Scientific Papers Summarization with Citation Graph [78.65955304229863]
We redefine the task of scientific papers summarization by utilizing their citation graph.
We construct a novel scientific paper summarization dataset, the Semantic Scholar Network (SSN), which contains 141K research papers in different domains.
Our model achieves competitive performance compared with pretrained models.
arXiv Detail & Related papers (2021-04-07T11:13:35Z)
- A High-Quality Multilingual Dataset for Structured Documentation Translation [101.41835967142521]
This paper presents a high-quality multilingual dataset for the documentation domain.
We collect XML-structured parallel text segments from the online documentation for an enterprise software platform.
arXiv Detail & Related papers (2020-06-24T02:08:44Z)
- Machine Identification of High Impact Research through Text and Image Analysis [0.4737991126491218]
We present a system that automatically separates papers with a high likelihood of gaining citations from those with a low likelihood.
Our system uses both a visual classifier, useful for surmising a document's overall appearance, and a text classifier, for making content-informed decisions.
arXiv Detail & Related papers (2020-05-20T19:12:24Z)
- Two Huge Title and Keyword Generation Corpora of Research Articles [0.0]
We introduce two huge datasets for text summarization (OAGSX) and keyword generation (OAGKX) research.
The data were retrieved from the Open Academic Graph, which is a network of research profiles and publications.
We plan to apply topic modeling to the two sets to derive subsets of research articles from more specific disciplines.
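A minimal topic-modeling pass of the kind mentioned above could look like the gensim sketch below; the toy documents stand in for OAGSX/OAGKX records.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy stand-ins for OAGSX/OAGKX abstracts.
docs = [
    "graph neural networks for molecule property prediction".split(),
    "transformer language models for scientific text summarization".split(),
    "protein structure prediction with deep learning".split(),
]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

# Fit a small LDA model and inspect the discovered topics.
lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```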
arXiv Detail & Related papers (2020-02-11T21:17:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.