Nanopublication-Based Semantic Publishing and Reviewing: A Field Study
with Formalization Papers
- URL: http://arxiv.org/abs/2203.01608v1
- Date: Thu, 3 Mar 2022 10:04:10 GMT
- Title: Nanopublication-Based Semantic Publishing and Reviewing: A Field Study
with Formalization Papers
- Authors: Cristina-Iulia Bucur and Tobias Kuhn and Davide Ceolin and Jacco van
Ossenbruggen
- Abstract summary: We use the concept and technology of nanopublications for this endeavor.
We represent not just the submissions and final papers in this RDF-based format, but also the whole process in between.
We received 15 submissions from 18 authors, who then went through the whole publication process leading to the publication of their contributions in the special issue.
- Score: 0.5735035463793008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapidly increasing amount of scientific literature, it is getting
continuously more difficult for researchers in different disciplines to be
updated with the recent findings in their field of study. Processing scientific
articles in an automated fashion has been proposed as a solution to this
problem, but the accuracy of such processing remains very poor for extraction
tasks beyond the basic ones. Few approaches have tried to change how we publish
scientific results in the first place, by making articles machine-interpretable
by expressing them with formal semantics from the start. In the work presented
here, we set out to demonstrate that we can formally publish high-level
scientific claims in formal logic, and publish the results in a special issue of
an existing journal. We use the concept and technology of nanopublications for
this endeavor, and represent not just the submissions and final papers in this
RDF-based format, but also the whole process in between, including
reviews, responses, and decisions. We do this by performing a field study with
what we call formalization papers, which contribute a novel formalization of a
previously published claim. We received 15 submissions from 18 authors, who then
went through the whole publication process leading to the publication of their
contributions in the special issue. Our evaluation shows the technical and
practical feasibility of our approach. The participating authors mostly showed
high levels of interest and confidence, and mostly experienced the process as
not very difficult, despite the technical nature of the current user
interfaces. We believe that these results indicate that it is possible to
publish scientific results from different fields with machine-interpretable
semantics from the start, which in turn opens countless possibilities to
radically improve in the future the effectiveness and efficiency of the
scientific endeavor as a whole.
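The nanopublication model the abstract builds on bundles each claim with its provenance and publication metadata in named RDF graphs. As a rough illustration (a sketch, not the authors' actual tooling; the example.org URIs, author node, and claim text are hypothetical placeholders, while the np: namespace is the real nanopub schema), such a structure could be serialized to TriG like this:

```python
# Minimal sketch of assembling a nanopublication in TriG syntax using only
# the standard library. A nanopublication consists of four named graphs:
# a head graph linking the other three, the assertion (the claim itself),
# its provenance, and its publication info.

def make_nanopub(np_uri, assertion, provenance, pubinfo):
    """Serialize the four named graphs of a nanopublication as a TriG string."""
    def graph(name, triples):
        body = "\n".join(f"  {s} {p} {o} ." for s, p, o in triples)
        return f"{name} {{\n{body}\n}}\n"

    # The head graph ties the nanopublication to its three content graphs.
    head = [
        (f"<{np_uri}>", "a", "np:Nanopublication"),
        (f"<{np_uri}>", "np:hasAssertion", ":assertion"),
        (f"<{np_uri}>", "np:hasProvenance", ":provenance"),
        (f"<{np_uri}>", "np:hasPublicationInfo", ":pubinfo"),
    ]
    prefixes = (
        "@prefix np: <http://www.nanopub.org/nschema#> .\n"
        "@prefix prov: <http://www.w3.org/ns/prov#> .\n"
        "@prefix dct: <http://purl.org/dc/terms/> .\n"
        f"@prefix : <{np_uri}#> .\n\n"
    )
    return (prefixes + graph(":head", head) + graph(":assertion", assertion)
            + graph(":provenance", provenance) + graph(":pubinfo", pubinfo))

# Hypothetical formalization of a previously published claim.
trig = make_nanopub(
    "https://example.org/np1",
    assertion=[(":claim", "dct:description", '"hypothetical formalized claim"')],
    provenance=[(":assertion", "prov:wasAttributedTo", ":author")],
    pubinfo=[("<https://example.org/np1>", "dct:created", '"2022-03-03"')],
)
print(trig)
```

Because reviews, responses, and decisions are themselves published as nanopublications in the study, each step of the process gets the same graph structure, which is what makes the whole publication workflow machine-interpretable.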
Related papers
- RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance [0.8089605035945486]
We propose RelevAI-Reviewer, an automatic system that conceptualizes the task of survey paper review as a classification problem.
We introduce a novel dataset comprising 25,164 instances. Each instance contains one prompt and four candidate papers, each varying in relevance to the prompt.
We develop a machine learning (ML) model capable of determining the relevance of each paper and identifying the most pertinent one.
arXiv Detail & Related papers (2024-06-13T06:42:32Z)
- MASSW: A New Dataset and Benchmark Tasks for AI-Assisted Scientific Workflows [58.56005277371235]
We introduce MASSW, a comprehensive text dataset on Multi-Aspect Summarization of Scientific Workflows.
MASSW includes more than 152,000 peer-reviewed publications from 17 leading computer science conferences spanning the past 50 years.
We demonstrate the utility of MASSW through multiple novel machine-learning tasks that can be benchmarked using this new dataset.
arXiv Detail & Related papers (2024-06-10T15:19:09Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- Repeatability, Reproducibility, Replicability, Reusability (4R) in Journals' Policies and Software/Data Management in Scientific Publications: A Survey, Discussion, and Perspectives [1.446375009535228]
We have found a large gap between citation-oriented practices, journal policies, recommendations, artifact description/evaluation guidelines, submission guides, and technological evolution.
The relationship between authors and scientific journals in their mutual efforts to jointly improve scientific results is analyzed.
We propose recommendations for the journal policies, as well as a unified and standardized Reproducibility Guide for the submission of scientific articles for authors.
arXiv Detail & Related papers (2023-12-18T09:02:28Z)
- The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces [54.2590226904332]
We describe the Semantic Reader Project, an effort across multiple institutions to explore automatic creation of dynamic reading interfaces for research papers.
Ten prototype interfaces have been developed, and studies with more than 300 participants and real-world users have shown improved reading experiences.
We structure this paper around challenges scholars and the public face when reading research papers.
arXiv Detail & Related papers (2023-03-25T02:47:09Z)
- Cracking Double-Blind Review: Authorship Attribution with Deep Learning [43.483063713471935]
We propose a transformer-based, neural-network architecture to attribute an anonymous manuscript to an author.
We leverage all research papers publicly available on arXiv amounting to over 2 million manuscripts.
Our method achieves an unprecedented authorship attribution accuracy, where up to 73% of papers are attributed correctly.
arXiv Detail & Related papers (2022-11-14T15:50:24Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, whereas editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- CitationIE: Leveraging the Citation Graph for Scientific Information Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Topic Space Trajectories: A case study on machine learning literature [0.0]
We present topic space trajectories, a structure that allows for the comprehensible tracking of research topics.
We show the applicability of our approach on a publication corpus spanning 50 years of machine learning research from 32 publication venues.
Our novel analysis method may be employed for paper classification, for the prediction of future research topics, and for the recommendation of fitting conferences and journals for submitting unpublished work.
arXiv Detail & Related papers (2020-10-23T10:53:42Z)
- Automatic generation of reviews of scientific papers [1.1999555634662633]
We present a method for the automatic generation of a review paper corresponding to a user-defined query.
The first part identifies key papers in the area by their bibliometric parameters, such as a graph of co-citations.
The second stage uses a BERT based architecture that we train on existing reviews for extractive summarization of these key papers.
arXiv Detail & Related papers (2020-10-08T17:47:07Z)