FAIRSECO: An Extensible Framework for Impact Measurement of Research Software
- URL: http://arxiv.org/abs/2406.02412v1
- Date: Tue, 4 Jun 2024 15:22:48 GMT
- Title: FAIRSECO: An Extensible Framework for Impact Measurement of Research Software
- Authors: Deekshitha, Siamak Farshidi, Jason Maassen, Rena Bakhshi, Rob van Nieuwpoort, Slinger Jansen
- Abstract summary: Existing methods for crediting research software and Research Software Engineers have proven to be insufficient.
We have developed FAIRSECO, an open source framework with the objective of assessing the impact of research software in research through the evaluation of various factors.
- Score: 1.549241498953151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing usage of research software in the research community has highlighted the need to recognize and acknowledge the contributions made not only by researchers but also by Research Software Engineers. However, the existing methods for crediting research software and Research Software Engineers have proven to be insufficient. In response, we have developed FAIRSECO, an extensible open source framework with the objective of assessing the impact of research software in research through the evaluation of various factors. The FAIRSECO framework addresses two critical information needs: firstly, it provides potential users of research software with metrics related to software quality and FAIRness. Secondly, the framework provides information for those who wish to measure the success of a project by offering impact data. By exploring the quality and impact of research software, our aim is to ensure that Research Software Engineers receive the recognition they deserve for their valuable contributions.
Related papers
- An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries [52.23798016734889]
This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries.
The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges.
arXiv Detail & Related papers (2024-09-27T16:20:20Z) - RSMM: A Framework to Assess Maturity of Research Software Project [1.285353663787249]
This paper introduces RSMM, a framework for evaluating and refining research software management.
RSMM offers a structured pathway for this evaluation by categorizing 79 best practices.
Individuals as well as organizations involved in research software development gain a systematic approach to tackling various research software engineering challenges.
arXiv Detail & Related papers (2024-06-03T21:10:05Z) - No Free Lunch: Research Software Testing in Teaching [1.4396109429521227]
This research explores the effects of research software testing integrated into teaching on research software.
In an in-vivo experiment, we integrated the engineering of a test suite for a large-scale network simulation as group projects into a course on software testing at the Blekinge Institute of Technology, Sweden.
We found that the research software benefited from the integration through substantially improved documentation and fewer hardware and software dependencies.
arXiv Detail & Related papers (2024-05-20T11:40:01Z) - Research information in the light of artificial intelligence: quality and data ecologies [0.0]
This paper presents multi- and interdisciplinary approaches for finding the appropriate AI technologies for research information.
Professional research information management (RIM) is becoming increasingly important as an expressly data-driven tool for researchers.
arXiv Detail & Related papers (2024-05-06T16:07:56Z) - SurveyAgent: A Conversational System for Personalized and Efficient Research Survey [50.04283471107001]
This paper introduces SurveyAgent, a novel conversational system designed to provide personalized and efficient research survey assistance to researchers.
SurveyAgent integrates three key modules: Knowledge Management for organizing papers, Recommendation for discovering relevant literature, and Query Answering for engaging with content on a deeper level.
Our evaluation demonstrates SurveyAgent's effectiveness in streamlining research activities and improving how researchers interact with scientific literature.
arXiv Detail & Related papers (2024-04-09T15:01:51Z) - Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - A Metadata-Based Ecosystem to Improve the FAIRness of Research Software [0.3185506103768896]
The reuse of research software is central to research efficiency and academic exchange.
The DataDesc ecosystem is presented, an approach to describing data models of software interfaces with detailed and machine-actionable metadata.
arXiv Detail & Related papers (2023-06-18T19:01:08Z) - Assessing Scientific Contributions in Data Sharing Spaces [64.16762375635842]
This paper introduces the SCIENCE-index, a blockchain-based metric measuring a researcher's scientific contributions.
To incentivize researchers to share their data, the SCIENCE-index is augmented to include a data-sharing parameter.
Our model is evaluated by comparing the distribution of its output for geographically diverse researchers to that of the h-index.
arXiv Detail & Related papers (2023-03-18T19:17:47Z) - Engaging with Researchers and Raising Awareness of FAIR and Open Science through the FAIR+ Implementation Survey Tool (FAIRIST) [0.0]
Six years after the seminal paper on FAIR was published, researchers still struggle to understand how to implement FAIR.
The FAIR+ Implementation Survey Tool (FAIRIST) mitigates the problem by integrating research requirements with research proposals in a systematic way.
arXiv Detail & Related papers (2023-01-17T22:38:30Z) - Towards a Fair Comparison and Realistic Design and Evaluation Framework of Android Malware Detectors [63.75363908696257]
We analyze 10 influential research works on Android malware detection using a common evaluation framework.
We identify five factors that, if not taken into account when creating datasets and designing detectors, significantly affect the trained ML models.
We conclude that the studied ML-based detectors have been evaluated optimistically, which explains the favorable published results.
arXiv Detail & Related papers (2022-05-25T08:28:08Z) - Learnings from Frontier Development Lab and SpaceML -- AI Accelerators for NASA and ESA [57.06643156253045]
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.