Implementation of Departmental and Periodical Examination Analyzer System
- URL: http://arxiv.org/abs/2103.05252v1
- Date: Tue, 9 Mar 2021 06:47:20 GMT
- Title: Implementation of Departmental and Periodical Examination Analyzer System
- Authors: Julius G. Garcia, Connie C. Aunario
- Abstract summary: The Departmental and Periodical Examination System was developed using the Visual Basic language.
The system was evaluated by a group of students, teachers, school administrators and information technology professionals.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Administering examinations in both public and private academic institutions
can be tedious and unmanageable. The multiplicity of problems affecting the
conduct of departmental and periodical examinations can be greatly reduced by
automating the examination process. The purpose of this action research is to
provide an alternative technical solution for administering tests through the use
of an Examination System. This software application can accommodate a large number
of examinees across different subjects, implements a random questioning technique,
and can generate item analyses and test results. The Departmental and
Periodical Examination System was developed using the Visual Basic language. The
software modules were tested using the functional testing method. Using the
criteria and metrics of the ISO 9126 software quality model, the system was
evaluated by a group of students, teachers, school administrators, and
information technology professionals, and it received an overall weighted mean
of 4.56585 with an excellent descriptive rating. The performance of the
application software therefore provides a solution that can overcome the major
problems of test administration and post-examination issues and performs all
the operations specified in the objectives.
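The abstract does not describe how the random questioning technique or the item analysis are implemented, and the system itself is written in Visual Basic. The following is only a minimal Python sketch of what such a module could look like, assuming a per-subject question bank and the standard difficulty and discrimination indices of classical item analysis; the names draw_exam, difficulty_index, and discrimination_index are illustrative, not taken from the paper.

    # Illustrative sketch only: the paper's system is written in Visual Basic and
    # its internals are not described in the abstract. Names and formulas here are
    # assumptions based on standard practice, not the authors' implementation.
    import random
    from typing import Dict, List, Optional

    def draw_exam(question_bank: Dict[str, List[str]], subject: str,
                  num_items: int, seed: Optional[int] = None) -> List[str]:
        # Random questioning: sample num_items questions from the subject's bank.
        rng = random.Random(seed)
        pool = question_bank[subject]
        return rng.sample(pool, k=min(num_items, len(pool)))

    def difficulty_index(correct_counts: List[int], num_examinees: int) -> List[float]:
        # Classical item analysis: proportion of examinees answering each item correctly.
        return [c / num_examinees for c in correct_counts]

    def discrimination_index(upper_correct: List[int], lower_correct: List[int],
                             group_size: int) -> List[float]:
        # Difference in proportion correct between the upper and lower scoring groups.
        return [(u - l) / group_size for u, l in zip(upper_correct, lower_correct)]

    # Example with placeholder data: 40 examinees, 3 items.
    print(difficulty_index([32, 20, 8], 40))                # [0.8, 0.5, 0.2]
    print(discrimination_index([10, 9, 6], [6, 3, 1], 10))  # [0.4, 0.6, 0.5]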
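The reported score of 4.56585 is an overall weighted mean of the evaluators' ratings, presumably aggregated across the ISO 9126 quality characteristics (functionality, reliability, usability, efficiency, maintainability, portability). The abstract gives neither the individual ratings nor the weights, so the snippet below only illustrates how such a weighted mean is computed; every number in it is a placeholder.

    # Hypothetical weighted mean over the six ISO 9126 characteristics; the
    # ratings and equal weights are placeholders, not figures from the paper.
    ratings = {
        "functionality": 4.7, "reliability": 4.5, "usability": 4.6,
        "efficiency": 4.5, "maintainability": 4.5, "portability": 4.6,
    }
    weights = {name: 1 / len(ratings) for name in ratings}  # equal weights assumed

    overall = sum(weights[name] * ratings[name] for name in ratings)
    print(round(overall, 5))  # 4.56667 with these placeholder values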
Related papers
- Auditing the Use of Language Models to Guide Hiring Decisions [2.949890760187898]
Regulatory efforts to protect against algorithmic bias have taken on increased urgency with rapid advances in large language models.
Current regulations -- as well as the scientific literature -- provide little guidance on how to conduct these assessments.
Here we propose and investigate one approach for auditing algorithms: correspondence experiments.
arXiv Detail & Related papers (2024-04-03T22:01:26Z) - Survey of Computerized Adaptive Testing: A Machine Learning Perspective [66.26687542572974]
Computerized Adaptive Testing (CAT) provides an efficient and tailored method for assessing the proficiency of examinees.
This paper aims to provide a machine learning-focused survey on CAT, presenting a fresh perspective on this adaptive testing method.
arXiv Detail & Related papers (2024-03-31T15:09:47Z) - The Challenges of Machine Learning for Trust and Safety: A Case Study on Misinformation Detection [0.8057006406834466]
We examine the disconnect between scholarship and practice in applying machine learning to trust and safety problems.
We survey literature on automated detection of misinformation across a corpus of 248 well-cited papers in the field.
We conclude that the current state-of-the-art in fully-automated detection has limited efficacy in detecting human-generated misinformation.
arXiv Detail & Related papers (2023-08-23T15:52:20Z) - Integrated Educational Management Tool for Adamson University [0.0]
The developed system automates the processes of examination and student grading.
The developed system was tested in Adamson University and evaluated using the ISO 9126 software product evaluation criteria.
arXiv Detail & Related papers (2022-12-12T05:19:37Z) - Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning [65.12226891589592]
This paper proposes a new approach to automated game validation and testing.
Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming.
arXiv Detail & Related papers (2022-08-15T11:08:44Z) - Fairness Testing: A Comprehensive Survey and Analysis of Trends [30.637712832450525]
Unfair behaviors of Machine Learning (ML) software have garnered increasing attention and concern among software engineers.
This paper offers a comprehensive survey of existing studies in this field.
arXiv Detail & Related papers (2022-07-20T22:41:38Z) - Metrics reloaded: Recommendations for image analysis validation [59.60445111432934]
Metrics Reloaded is a comprehensive framework guiding researchers in the problem-aware selection of metrics.
The framework was developed in a multi-stage Delphi process and is based on the novel concept of a problem fingerprint.
Based on the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics.
arXiv Detail & Related papers (2022-06-03T15:56:51Z) - A Software Tool for Evaluating Unmanned Autonomous Systems [0.9786690381850356]
This paper presents an example of one such simulation-based technology tool, the Data-Driven Intelligent Prediction Tool (DIPT).
DIPT was developed for testing a multi-platform Unmanned Aerial Vehicle (UAV) system capable of conducting collaborative search missions.
arXiv Detail & Related papers (2021-11-21T18:17:57Z) - Scaling up Search Engine Audits: Practical Insights for Algorithm Auditing [68.8204255655161]
We set up experiments for eight search engines with hundreds of virtual agents placed in different regions.
We demonstrate the successful performance of our research infrastructure across multiple data collections.
We conclude that virtual agents are a promising avenue for monitoring the performance of algorithms across long periods of time.
arXiv Detail & Related papers (2021-06-10T15:49:58Z) - Quality meets Diversity: A Model-Agnostic Framework for Computerized Adaptive Testing [60.38182654847399]
Computerized Adaptive Testing (CAT) is emerging as a promising testing application in many scenarios.
We propose a novel framework, Model-Agnostic Adaptive Testing (MAAT), for CAT.
arXiv Detail & Related papers (2021-01-15T06:48:50Z) - A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges [76.20963684020145]
Uncertainty quantification (UQ) plays a pivotal role in reduction of uncertainties during both optimization and decision making processes.
Bayesian approximation and ensemble learning techniques are the two most widely used UQ methods in the literature.
This study reviews recent advances in UQ methods used in deep learning and investigates the application of these methods in reinforcement learning.
arXiv Detail & Related papers (2020-11-12T06:41:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.