Seminar and Training Programs Recommender System for Faculty Members of
Higher Education Institution
- URL: http://arxiv.org/abs/2012.01167v1
- Date: Sat, 21 Nov 2020 00:53:51 GMT
- Title: Seminar and Training Programs Recommender System for Faculty Members of
Higher Education Institution
- Authors: Albert V. Paytaren
- Abstract summary: The researcher used the Descriptive Developmental Method of research to gather information relevant to the current problems and challenges encountered.
The level of acceptance of the developed system was evaluated by 24 faculty respondents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study aims to develop a personalized Recommender System that helps to
address the problems encountered by the faculty members of Higher Education
Institutions in the selection of Seminar and Training Programs (STP). The
researcher used the Descriptive Developmental Method of research to gather
information relevant to the current problems and challenges encountered and
used these to develop software that addresses the identified challenges. For
the development of the software, the researcher adopted a step-wise approach
defined in the Incremental Developmental Model. Twenty-four faculty
respondents evaluated the level of acceptance of the developed system,
classified into functionality, reliability, and usability; the study garnered
evaluation scores of 4.65, 4.67, and 4.67 respectively. The overall
interpretation of the evaluation results is Highly Acceptable. The study
produced a system that provides seminar and training program recommendations.
The developed recommender system was rated Highly Acceptable; respondents were
very satisfied with the features of the system and agreed that it was
functional, reliable, and usable.
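The abstract does not disclose the matching algorithm behind the recommender. Purely as an illustrative sketch (not the authors' method), a minimal content-based approach could rank seminar and training programs (STPs) by keyword overlap between a faculty member's profile and each program description; all program titles, descriptions, and function names below are hypothetical:

```python
# Hypothetical sketch of a content-based STP recommender.
# The paper does not specify its algorithm; this only illustrates
# one simple approach: Jaccard similarity over keyword sets.

def tokenize(text):
    """Lowercase a text and split it into a set of keyword tokens."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets (0.0 when both empty)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(profile, programs, top_n=3):
    """Rank programs by keyword overlap with the faculty profile."""
    p = tokenize(profile)
    scored = [(jaccard(p, tokenize(desc)), title)
              for title, desc in programs.items()]
    scored.sort(reverse=True)  # highest similarity first
    return [title for score, title in scored[:top_n] if score > 0]

# Hypothetical program catalog and faculty profile.
programs = {
    "Outcomes-Based Teaching Workshop": "curriculum design outcomes based teaching assessment",
    "Research Writing Seminar": "research writing publication methodology",
    "ICT Tools Training": "technology classroom ict tools e-learning",
}
print(recommend("faculty interested in research methodology and publication",
                programs))  # → ['Research Writing Seminar']
```

A production system would likely weight terms (e.g. TF-IDF) and incorporate faculty ratings, but the set-overlap version keeps the core idea visible.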
Related papers
- SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories [55.161075901665946]
SUPER aims to capture the realistic challenges faced by researchers working with Machine Learning (ML) and Natural Language Processing (NLP) research repositories.
Our benchmark comprises three distinct problem sets: 45 end-to-end problems with annotated expert solutions, 152 sub problems derived from the expert set that focus on specific challenges, and 602 automatically generated problems for larger-scale development.
We show that state-of-the-art approaches struggle to solve these problems, with the best model (GPT-4o) solving only 16.3% of the end-to-end set and 46.1% of the scenarios.
arXiv Detail & Related papers (2024-09-11T17:37:48Z)
- CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence [55.21518669075263]
CURE4Rec is the first comprehensive benchmark for recommendation unlearning evaluation.
We consider the deeper influence of unlearning on recommendation fairness and robustness towards data with varying impact levels.
arXiv Detail & Related papers (2024-08-26T16:21:50Z)
- Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [175.9723801486487]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z)
- Evaluative Item-Contrastive Explanations in Rankings [47.24529321119513]
This paper advocates for the application of a specific form of Explainable AI -- namely, contrastive explanations -- as well-suited for addressing ranking problems.
The present work introduces Evaluative Item-Contrastive Explanations tailored for ranking systems and illustrates its application and characteristics through an experiment conducted on publicly available data.
arXiv Detail & Related papers (2023-12-14T15:40:51Z)
- Towards a Success Model for Automated Programming Assessment Systems Used as a Formative Assessment Tool [42.03652286907358]
The assessment of source code in university education is a central and important task for lecturers of programming courses.
The use of automated programming assessment systems (APASs) is a promising solution.
Measuring the effectiveness and success of APASs is crucial to understanding how such platforms should be designed, implemented, and used.
arXiv Detail & Related papers (2023-06-08T06:19:15Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Integrated Educational Management Tool for Adamson University [0.0]
The developed system automates the processes of examination and student grading.
The developed system was tested in Adamson University and evaluated using the ISO 9126 software product evaluation criteria.
arXiv Detail & Related papers (2022-12-12T05:19:37Z)
- A Multicriteria Evaluation for Data-Driven Programming Feedback Systems: Accuracy, Effectiveness, Fallibility, and Students' Response [7.167352606079407]
Data-driven programming feedback systems can help novices to program in the absence of a human tutor.
Prior evaluations showed that these systems improve learning in terms of test scores or task-completion efficiency.
The aspects evaluated here include the inherent fallibility of the current state of the art, students' programming behavior in response to correct/incorrect feedback, and effective/ineffective system components.
arXiv Detail & Related papers (2022-07-27T00:29:32Z)
- Designing and Implementing e-School Systems: An Information Systems Approach to School Management of a Community College in Northern Mindanao, Philippines [0.0]
The School Management Information System has been designed and developed for a community college in Mindanao.
The project has been evaluated based on ISO 25010, a quality model used for product/software quality evaluation systems.
The overall quality and performance of the system was very good in terms of functionality, usability, and reliability.
arXiv Detail & Related papers (2021-09-01T05:53:35Z)
- PRESENT: An Android-Based Class Attendance Monitoring System Using Face Recognition Technology [0.0]
The researcher used incremental model as the software development process and the application was evaluated by seventeen (17) faculty members.
The respondents assessed the developed application as moderately acceptable in terms of functionality, reliability, and usability.
With the integration of technologies such as Android, face recognition, and SMS, the traditional way of checking class attendance can be made easier, faster, more reliable, and more secure.
arXiv Detail & Related papers (2020-11-20T02:25:00Z)
- Opportunities of a Machine Learning-based Decision Support System for Stroke Rehabilitation Assessment [64.52563354823711]
Rehabilitation assessment is critical to determine an adequate intervention for a patient.
Current assessment practices rely mainly on the therapist's experience, and assessments are executed infrequently due to limited therapist availability.
We developed an intelligent decision support system that can identify salient features of assessment using reinforcement learning.
arXiv Detail & Related papers (2020-02-27T17:04:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.