A Collaborative Process Parameter Recommender System for Fleets of Networked Manufacturing Machines -- with Application to 3D Printing
- URL: http://arxiv.org/abs/2506.12252v1
- Date: Fri, 13 Jun 2025 21:56:53 GMT
- Title: A Collaborative Process Parameter Recommender System for Fleets of Networked Manufacturing Machines -- with Application to 3D Printing
- Authors: Weishi Wang, Sicong Guo, Chenhuan Jiang, Mohamed Elidrisi, Myungjin Lee, Harsha V. Madhyastha, Raed Al Kontar, Chinedum E. Okwudire,
- Abstract summary: 3D printing farms consist of multiple networked 3D printers operating in parallel. Optimizing process parameters across a fleet of manufacturing machines, even of the same type, remains a challenge due to machine-to-machine variability. We introduce a machine learning-based collaborative recommender system that optimizes process parameters for each machine in a fleet by modeling the problem as a sequential matrix completion task.
- Score: 4.886682562411186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fleets of networked manufacturing machines of the same type, that are collocated or geographically distributed, are growing in popularity. An excellent example is the rise of 3D printing farms, which consist of multiple networked 3D printers operating in parallel, enabling faster production and efficient mass customization. However, optimizing process parameters across a fleet of manufacturing machines, even of the same type, remains a challenge due to machine-to-machine variability. Traditional trial-and-error approaches are inefficient, requiring extensive testing to determine optimal process parameters for an entire fleet. In this work, we introduce a machine learning-based collaborative recommender system that optimizes process parameters for each machine in a fleet by modeling the problem as a sequential matrix completion task. Our approach leverages spectral clustering and alternating least squares to iteratively refine parameter predictions, enabling real-time collaboration among the machines in a fleet while minimizing the number of experimental trials. We validate our method using a mini 3D printing farm consisting of ten 3D printers for which we optimize acceleration and speed settings to maximize print quality and productivity. Our approach achieves significantly faster convergence to optimal process parameters compared to non-collaborative matrix completion.
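The abstract describes formulating fleet-wide parameter selection as a matrix completion problem solved with alternating least squares (ALS). The paper does not provide code, so the following is a minimal sketch of ALS matrix completion on a hypothetical machines-by-settings quality matrix; all names (`als_complete`, `R`, `mask`) and the specific update rules are illustrative assumptions, not the authors' implementation, and the spectral-clustering step is omitted.

```python
import numpy as np

def als_complete(R, mask, rank=2, lam=0.1, iters=50, seed=0):
    """Complete a machines-x-settings quality matrix R via alternating
    least squares, fitting only the observed entries (mask == 1)."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.standard_normal((m, rank)) * 0.1  # latent machine factors
    V = rng.standard_normal((n, rank)) * 0.1  # latent setting factors
    I = lam * np.eye(rank)                    # ridge regularizer
    for _ in range(iters):
        for i in range(m):                    # update each machine's factor
            idx = mask[i] == 1
            if idx.any():
                Vi = V[idx]
                U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ R[i, idx])
        for j in range(n):                    # update each setting's factor
            idx = mask[:, j] == 1
            if idx.any():
                Uj = U[idx]
                V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ R[idx, j])
    return U @ V.T  # predicted quality for every machine/setting pair
```

In the paper's sequential setting, a recommender would repeatedly pick the most promising untested setting per machine from the completed matrix, run the trial, add the result to the observed entries, and re-solve.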
Related papers
- Eliminating Multi-GPU Performance Taxes: A Systems Approach to Efficient Distributed LLMs [61.953548065938385]
We introduce the "Three Taxes" (Bulk Synchronous, Inter-Kernel Data Locality, and Kernel Launch Overhead) as an analytical framework. We propose moving beyond the rigid BSP model to address key inefficiencies in distributed GPU execution. We observe a 10-20% speedup in end-to-end latency over BSP-based approaches.
arXiv Detail & Related papers (2025-11-04T01:15:44Z)
- Leveraging Machine Learning and Enhanced Parallelism Detection for BPMN Model Generation from Text [75.77648333476776]
This paper introduces an automated pipeline for extracting BPMN models from text. A key contribution of this work is the introduction of a newly annotated dataset. We augment the dataset with 15 newly annotated documents containing 32 parallel gateways for model training.
arXiv Detail & Related papers (2025-07-11T07:25:55Z)
- Don't be lazy: CompleteP enables compute-efficient deep transformers [50.85418589942566]
Some parameterizations fail to transfer optimal base HPs across changes in model depth. We develop theory to show parameterizations may still exist in the lazy learning regime. We identify and adopt the parameterization we call CompleteP that achieves both depth-wise HP transfer and non-lazy learning in all layers.
arXiv Detail & Related papers (2025-05-02T22:45:14Z)
- Sample-Efficient Bayesian Transfer Learning for Online Machine Parameter Optimization [5.467297536043163]
This work introduces a method to optimize the machine parameters in the system itself using a Bayesian optimization algorithm. By leveraging existing machine data, we use a transfer learning approach in order to identify an optimum with minimal iterations. We validate our approach on a laser machine for cutting sheet metal in the real world.
arXiv Detail & Related papers (2025-03-20T08:08:17Z)
- Controllable Prompt Tuning For Balancing Group Distributional Robustness [53.336515056479705]
We introduce an optimization scheme to achieve good performance across groups and find a good solution for all without severely sacrificing performance on any of them.
We propose Controllable Prompt Tuning (CPT), which couples our approach with prompt-tuning techniques.
On spurious correlation benchmarks, our procedures achieve state-of-the-art results across both transformer and non-transformer architectures, as well as unimodal and multimodal data.
arXiv Detail & Related papers (2024-03-05T06:23:55Z)
- Towards General and Efficient Online Tuning for Spark [55.30868031221838]
We present a general and efficient Spark tuning framework that can deal with the three issues simultaneously.
We have implemented this framework as an independent cloud service, and applied it to the data platform in Tencent.
arXiv Detail & Related papers (2023-09-05T02:16:45Z)
- Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose Aurora, a graceful prompt framework for cross-modal transfer, to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z)
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high-performance and energy-efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization [13.89136187674851]
In BO, a computationally cheap surrogate model is employed to learn the relationship between parameter configurations and their performance.
We present an asynchronous-decentralized BO, wherein each worker runs a sequential BO and asynchronously communicates its results through shared storage.
We scale our method to 1,920 parallel workers without loss of computational efficiency, maintaining over 95% worker utilization.
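The coordination pattern this summary describes, where each worker runs its own sequential optimization loop and asynchronously publishes results to shared storage, can be sketched as follows. This is not the paper's algorithm: the Bayesian surrogate is replaced by a toy local-search heuristic, and all names (`objective`, `worker`, `shared_history`) are illustrative assumptions.

```python
import threading
import random

shared_history = []       # "shared storage": (x, y) evaluations from all workers
lock = threading.Lock()

def objective(x):
    # Hypothetical expensive black-box function; a cheap stand-in here,
    # maximized at x = 0.7.
    return -(x - 0.7) ** 2

def worker(n_steps, rng):
    for _ in range(n_steps):
        with lock:
            history = list(shared_history)       # read everyone's results
        if history:
            # Toy stand-in for a surrogate model: perturb the best point seen
            best_x = max(history, key=lambda p: p[1])[0]
            x = min(1.0, max(0.0, best_x + rng.gauss(0, 0.1)))
        else:
            x = rng.random()                     # no data yet: explore
        y = objective(x)
        with lock:
            shared_history.append((x, y))        # publish asynchronously

threads = [threading.Thread(target=worker, args=(20, random.Random(i)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
best = max(shared_history, key=lambda p: p[1])
```

No worker ever waits for another; each simply reads whatever results have been published so far, which is the source of the method's high worker utilization.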
arXiv Detail & Related papers (2022-07-01T15:07:56Z)
- Surrogate Modelling for Injection Molding Processes using Machine Learning [0.23090185577016442]
Injection molding is one of the most popular manufacturing methods for producing complex plastic objects.
We propose a baseline for a data processing pipeline that includes the extraction of data from Moldflow simulation projects.
We evaluate machine learning models for fill time and deflection distribution prediction and provide baseline values of MSE and RMSE metrics.
arXiv Detail & Related papers (2021-07-30T12:13:52Z)
- Experimental Investigation and Evaluation of Model-based Hyperparameter Optimization [0.3058685580689604]
This article presents an overview of theoretical and practical results for popular machine learning algorithms.
The R package mlr is used as a uniform interface to the machine learning models.
arXiv Detail & Related papers (2021-07-19T11:37:37Z)
- Hyperparameter Optimization via Sequential Uniform Designs [4.56877715768796]
This paper reformulates HPO as a computer experiment and proposes a novel sequential uniform design (SeqUD) strategy with three-fold advantages.
The proposed SeqUD strategy outperforms benchmark HPO methods and is therefore a promising and competitive alternative to existing AutoML tools.
arXiv Detail & Related papers (2020-09-08T08:55:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.