Teaching MLOps in Higher Education through Project-Based Learning
- URL: http://arxiv.org/abs/2302.01048v1
- Date: Thu, 2 Feb 2023 12:22:30 GMT
- Title: Teaching MLOps in Higher Education through Project-Based Learning
- Authors: Filippo Lanubile, Silverio Martínez-Fernández, Luigi Quaranta
- Abstract summary: We present a project-based learning approach to teaching MLOps.
We examine the design of a course based on this approach.
We report on preliminary results from the first edition of the course.
- Score: 7.294965109944707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building and maintaining production-grade ML-enabled components is a complex
endeavor that goes beyond the current approach of academic education, focused
on the optimization of ML model performance in the lab. In this paper, we
present a project-based learning approach to teaching MLOps, focused on
demonstrating, and providing hands-on experience with, emerging practices and
tools that automate the construction of ML-enabled components. We examine the design of a course
based on this approach, including laboratory sessions that cover the end-to-end
ML component life cycle, from model building to production deployment.
Moreover, we report on preliminary results from the first edition of the
course. This year, an updated version of the same course is being delivered at
two independent universities; the related learning outcomes will
be evaluated to analyze the effectiveness of project-based learning for this
specific subject.
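The abstract does not name the specific tools used in the laboratory sessions. As a minimal sketch of the kind of automation the course targets, the following hypothetical script trains a model, checks it against a quality gate, and packages it as a deployable artifact, the sort of step a CI pipeline could run on every commit; the library choices (scikit-learn, joblib) and the accuracy threshold are assumptions for illustration only.

```python
# Minimal sketch of one automated step in an ML component life cycle:
# train -> evaluate against a quality gate -> package a deployable artifact.
# Tool choices (scikit-learn, joblib) are illustrative assumptions, not the
# course's actual toolchain.
from pathlib import Path

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.90  # quality gate a (hypothetical) CI pipeline would enforce

def build_component(artifact_dir: str = "artifacts") -> Path:
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    if accuracy < MIN_ACCURACY:
        # Failing the gate breaks the build instead of shipping a bad model.
        raise RuntimeError(f"accuracy {accuracy:.3f} below gate {MIN_ACCURACY}")

    out = Path(artifact_dir)
    out.mkdir(exist_ok=True)
    path = out / "model.joblib"
    joblib.dump(model, path)  # artifact later promoted to a serving environment
    return path

if __name__ == "__main__":
    print(f"packaged model at {build_component()}")
```

A full pipeline would extend this single step across the life cycle the laboratory sessions cover, from model building to production deployment.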
Related papers
- Estimating the Effects of Sample Training Orders for Large Language Models without Retraining [49.59675538160363]
The order of training samples plays a crucial role in large language models (LLMs).
Traditional methods for investigating this effect generally require retraining the model with various sample orders.
We improve traditional methods by designing a retraining-free framework.
arXiv Detail & Related papers (2025-05-28T07:07:02Z)
- Efficient Model Selection for Time Series Forecasting via LLMs [52.31535714387368]
We propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection.
Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs.
arXiv Detail & Related papers (2025-04-02T20:33:27Z)
- Continuous Integration Practices in Machine Learning Projects: The Practitioners' Perspective [1.4165457606269516]
This study surveys 155 practitioners from 47 Machine Learning (ML) projects.
Practitioners highlighted eight key differences, including test complexity, infrastructure requirements, and build duration and stability.
Common challenges mentioned by practitioners include higher project complexity, model training demands, extensive data handling, increased computational resource needs, and dependency management.
arXiv Detail & Related papers (2025-02-24T18:01:50Z)
- Investigating the Zone of Proximal Development of Language Models for In-Context Learning [59.91708683601029]
We introduce a learning analytics framework to analyze the in-context learning (ICL) behavior of large language models (LLMs).
We adapt the Zone of Proximal Development (ZPD) theory to ICL, measuring the ZPD of LLMs based on model performance on individual examples.
Our findings reveal a series of intricate and multifaceted behaviors of ICL, providing new insights into understanding and leveraging this technique.
arXiv Detail & Related papers (2025-02-10T19:36:21Z)
- MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs [97.94579295913606]
Multimodal Large Language Models (MLLMs) have garnered increased attention from both industry and academia.
In the development process, evaluation is critical since it provides intuitive feedback and guidance on improving models.
This work aims to offer researchers an easy grasp of how to effectively evaluate MLLMs according to different needs and to inspire better evaluation methods.
arXiv Detail & Related papers (2024-11-22T18:59:54Z)
- Quantifying the Effectiveness of Student Organization Activities using Natural Language Processing [0.0]
This research study aims to develop a machine learning workflow that will quantify the effectiveness of student-organized activities.
The study uses the Bidirectional Encoder Representations from Transformers (BERT) Large Language Model (LLM), accessed via the pysentimiento toolkit as a Transformers pipeline in Hugging Face.
The results show that the BERT LLM can also be used effectively in analyzing sentiment beyond product reviews and post comments.
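The summary above names the pysentimiento toolkit, which wraps pre-trained transformer models behind a simple analyzer interface. A minimal sketch of such a sentiment call is shown below; the exact model, language setting, and label set used in the study are not stated in the summary, so those here are assumptions.

```python
# Minimal sketch of sentiment analysis with pysentimiento (pip install pysentimiento).
# The specific model, language, and labels used by the study are assumptions here.
from pysentimiento import create_analyzer

analyzer = create_analyzer(task="sentiment", lang="en")

feedback = [
    "The leadership seminar was well organized and genuinely useful.",
    "Registration was confusing and the venue was too small.",
]

for text in feedback:
    result = analyzer.predict(text)
    # result.output is the predicted label (e.g. POS/NEG/NEU);
    # result.probas maps each label to its probability.
    print(text, "->", result.output, result.probas)
```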
arXiv Detail & Related papers (2024-08-16T12:16:59Z)
- Integrating HCI Datasets in Project-Based Machine Learning Courses: A College-Level Review and Case Study [0.7499722271664147]
This study explores the integration of real-world machine learning (ML) projects using human-computer interfaces (HCI) datasets in college-level courses.
arXiv Detail & Related papers (2024-08-06T23:05:15Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze the MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- Evaluating Language Models for Generating and Judging Programming Feedback [4.743413681603463]
Large language models (LLMs) have transformed research and practice across a wide range of domains.
We evaluate the efficiency of open-source LLMs in generating high-quality feedback for programming assignments.
arXiv Detail & Related papers (2024-07-05T21:44:11Z)
- Comprehensive Reassessment of Large-Scale Evaluation Outcomes in LLMs: A Multifaceted Statistical Approach [64.42462708687921]
Evaluations have revealed that factors such as scaling, training types, architectures and other factors profoundly impact the performance of LLMs.
Our study embarks on a thorough re-examination of these LLMs, targeting the inadequacies in current evaluation methods.
This includes the application of ANOVA, Tukey HSD tests, GAMM, and clustering techniques.
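As an illustration of the first two techniques listed above, the following sketch runs a one-way ANOVA and a Tukey HSD post-hoc test on fabricated scores grouped by a made-up factor; the data and grouping are invented for illustration and are not taken from the paper.

```python
# Illustrative one-way ANOVA followed by Tukey HSD on fabricated example scores;
# the grouping variable and the data are invented, not taken from the paper.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
scores = {
    "family_a": rng.normal(0.72, 0.03, size=30),
    "family_b": rng.normal(0.75, 0.03, size=30),
    "family_c": rng.normal(0.70, 0.03, size=30),
}

# Omnibus test: do the group means differ at all?
f_stat, p_value = f_oneway(*scores.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Post-hoc pairwise comparisons with family-wise error control.
values = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```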
arXiv Detail & Related papers (2024-03-22T14:47:35Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- From Summary to Action: Enhancing Large Language Models for Complex Tasks with Open World APIs [62.496139001509114]
We introduce a novel tool invocation pipeline designed to control massive real-world APIs.
This pipeline mirrors the human task-solving process, addressing complicated real-life user queries.
Empirical evaluations of our Sum2Act pipeline on the ToolBench benchmark show significant performance improvements.
arXiv Detail & Related papers (2024-02-28T08:42:23Z)
- Exploring MLOps Dynamics: An Experimental Analysis in a Real-World Machine Learning Project [0.0]
The experiment involves a comprehensive MLOps workflow, covering essential phases like problem definition, data acquisition, data preparation, model development, model deployment, monitoring, management, scalability, and governance and compliance.
A systematic tracking approach was employed to document revisits to specific phases from a main phase under focus, capturing the reasons for such revisits.
The resulting data provides visual representations of the MLOps process's interdependencies and iterative characteristics within the experimental framework.
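A minimal sketch of how such phase revisits could be recorded and summarized is given below; the phase names follow the summary above, while the logging format and example entries are assumptions for illustration.

```python
# Minimal sketch of recording MLOps phase revisits as (from_phase, to_phase, reason)
# entries and counting how often each backward transition occurs.
# The logging format and entries are assumptions; phase names follow the summary above.
from collections import Counter

revisits = [
    ("model development", "data preparation", "feature leakage discovered"),
    ("model deployment", "model development", "latency budget exceeded"),
    ("monitoring", "data acquisition", "input drift detected"),
    ("monitoring", "data acquisition", "new data source required"),
]

transition_counts = Counter((src, dst) for src, dst, _ in revisits)
for (src, dst), count in transition_counts.most_common():
    print(f"{src} -> {dst}: {count} revisit(s)")
```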
arXiv Detail & Related papers (2023-07-22T10:33:19Z)
- Iterative Forward Tuning Boosts In-Context Learning in Language Models [88.25013390669845]
In this study, we introduce a novel two-stage framework to boost in-context learning in large language models (LLMs).
Specifically, our framework delineates the ICL process into two distinct stages: Deep-Thinking and test stages.
The Deep-Thinking stage incorporates a unique attention mechanism, i.e., iterative enhanced attention, which enables multiple rounds of information accumulation.
arXiv Detail & Related papers (2023-05-22T13:18:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.