An enhanced simulation-based multi-objective optimization approach with
knowledge discovery for reconfigurable manufacturing systems
- URL: http://arxiv.org/abs/2212.00581v1
- Date: Wed, 30 Nov 2022 10:30:07 GMT
- Title: An enhanced simulation-based multi-objective optimization approach with
knowledge discovery for reconfigurable manufacturing systems
- Authors: Carlos Alberto Barrera-Diaz, Amir Nourmohammadi, Henrik Smedberg,
Tehseen Aslam, Amos H.C. Ng
- Abstract summary: This study addresses work tasks and resource allocations to workstations together with buffer capacity allocation in RMS.
The aim is to simultaneously maximize throughput and minimize total buffer capacity under fluctuating production volumes and capacity changes.
An enhanced simulation-based multi-objective optimization (SMO) approach with customized simulation and optimization components is proposed.
- Score: 0.6824747267214372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In today's uncertain and competitive market, where enterprises are subjected
to increasingly shortened product life-cycles and frequent volume changes,
reconfigurable manufacturing systems (RMS) applications play a significant role
in the manufacturing industry's success. Despite the advantages offered by RMS,
achieving a high-efficiency degree constitutes a challenging task for
stakeholders and decision-makers when they face the trade-off decisions
inherent in these complex systems. This study addresses work tasks and resource
allocations to workstations together with buffer capacity allocation in RMS.
The aim is to simultaneously maximize throughput and minimize total buffer
capacity under fluctuating production volumes and capacity changes while
considering the stochastic behavior of the system. An enhanced simulation-based
multi-objective optimization (SMO) approach with customized simulation and
optimization components is proposed to address the abovementioned challenges.
Apart from presenting the optimal solutions subject to volume and capacity
changes, the proposed approach supports decision-makers with discovered
knowledge to further understand the RMS design. In particular, this study
presents a problem-specific customized SMO combined with a novel flexible
pattern mining method for optimizing RMS and conducting post-optimal analyses.
To this end, this study demonstrates the benefits of applying SMO and
knowledge discovery methods for fast decision-support and production planning
of RMS.
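The abstract does not spell out the optimization internals, so the following is only a minimal Python sketch of the general shape of a simulation-based multi-objective optimization loop: a stochastic simulation is replicated to estimate throughput for a candidate buffer allocation, and a nondominated archive tracks the throughput-versus-total-buffer-capacity trade-off. The toy throughput model, the random-search strategy, and all function names are assumptions for illustration, not the authors' customized SMO components or pattern-mining method.

```python
import random

# Hypothetical stand-in for the discrete-event simulation of the RMS:
# given per-station buffer capacities, return one noisy throughput observation.
def simulate_throughput(buffers, seed):
    rng = random.Random(seed)
    base = min(10.0 + 2.0 * b for b in buffers)    # crude bottleneck effect of the smallest buffer
    return base * rng.uniform(0.9, 1.1)            # stochastic behaviour of the system

def evaluate(buffers, replications=10):
    """Average several simulation replications to cope with output noise."""
    mean_tp = sum(simulate_throughput(buffers, r) for r in range(replications)) / replications
    return mean_tp, sum(buffers)                   # objectives: (throughput, total buffer capacity)

def dominates(a, b):
    """a dominates b: throughput no worse, total buffer no worse, strictly better in one."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def smo_random_search(n_stations=5, max_buffer=10, iterations=500, seed=1):
    """Keep a nondominated archive of buffer allocations found by random sampling."""
    rng = random.Random(seed)
    archive = []                                   # list of (objectives, design) pairs
    for _ in range(iterations):
        design = [rng.randint(0, max_buffer) for _ in range(n_stations)]
        obj = evaluate(design)
        if any(dominates(a, obj) for a, _ in archive):
            continue                               # dominated by an already archived solution
        archive = [(a, d) for a, d in archive if not dominates(obj, a)]
        archive.append((obj, design))
    return archive

for (tp, buf), design in sorted(smo_random_search(), key=lambda item: item[0][1]):
    print(f"total buffer = {buf:3d}  throughput = {tp:6.2f}  buffers = {design}")
```

In the paper's setting, the random sampling would be replaced by the customized optimization components and the toy simulator by the discrete-event model of the RMS; the sketch only shows where those pieces plug in.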
Related papers
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
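The EVOLvE entry above refers to optimal exploration algorithms for bandit problems; as a point of reference only, here is a minimal UCB1 sketch on Bernoulli arms. The setup and all names are illustrative assumptions, not code from that paper.

```python
import math
import random

def ucb1(arm_means, horizon=2000, seed=0):
    """Minimal UCB1 on Bernoulli arms: pull the arm with the highest upper confidence bound."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)
    rewards = [0.0] * len(arm_means)
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= len(arm_means):                    # play each arm once first
            arm = t - 1
        else:
            arm = max(range(len(arm_means)),
                      key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
        total_reward += reward
    return counts, total_reward

counts, total = ucb1([0.2, 0.5, 0.7])
print("pulls per arm:", counts, "cumulative reward:", total)
```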
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs [55.20845457594977]
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making.
We present MR-Ben, a process-based benchmark that demands meta-reasoning skill.
Our meta-reasoning paradigm is especially suited for system-2 slow thinking.
arXiv Detail & Related papers (2024-06-20T03:50:23Z)
- Large Language Model Agent as a Mechanical Designer [7.136205674624813]
In this study, we present a novel approach that integrates pre-trained LLMs with a FEM module.
The FEM module evaluates each design and provides essential feedback, guiding the LLMs to continuously learn, plan, generate, and optimize designs without the need for domain-specific training.
Our results reveal that these LLM-based agents can successfully generate truss designs that comply with natural language specifications with a success rate of up to 90%, which varies according to the applied constraints.
arXiv Detail & Related papers (2024-04-26T16:41:24Z)
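The mechanical-designer entry above describes a propose-evaluate-feedback loop between an LLM and an FEM module. The sketch below only mirrors that control flow with toy stand-ins: llm_propose and fem_evaluate are hypothetical placeholders (a random guess and a single-bar stress check), not the paper's agent or solver.

```python
import random

rng = random.Random(0)

def llm_propose(history):
    """Toy stand-in for the LLM call: guess a member cross-section area (cm^2),
    raising the lower bound after an overstressed design was rejected."""
    low, high = 1.0, 50.0
    if history and "overstressed" in history[-1][1]:
        low = history[-1][0]                       # need at least the last rejected area
    return rng.uniform(low, high)

def fem_evaluate(area_cm2, load_kN=100.0, allowable_MPa=150.0):
    """Toy stand-in for the FEM check: axial stress in a single bar under the given load."""
    stress = load_kN * 1e3 / (area_cm2 * 1e2)      # N / mm^2 = MPa
    if stress > allowable_MPa:
        return False, f"overstressed: {stress:.1f} MPa > {allowable_MPa} MPa"
    return True, f"ok: {stress:.1f} MPa"

def design_loop(max_iterations=20):
    """Iterate: propose a design, check it, and feed the findings back to the proposer."""
    history = []
    for _ in range(max_iterations):
        area = llm_propose(history)
        feasible, feedback = fem_evaluate(area)
        if feasible:
            return area, feedback
        history.append((area, feedback))           # the feedback guides the next proposal
    return None, "no compliant design within the iteration budget"

print(design_loop())
```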
- RLEMMO: Evolutionary Multimodal Optimization Assisted By Deep Reinforcement Learning [8.389454219309837]
Multimodal optimization problems (MMOPs) require finding all optimal solutions, which is challenging under a limited budget of function evaluations.
We propose RLEMMO, a Meta-Black-Box Optimization framework, which maintains a population of solutions and incorporates a reinforcement learning agent.
With a novel reward mechanism that encourages both quality and diversity, RLEMMO can be effectively trained using a policy gradient algorithm.
arXiv Detail & Related papers (2024-04-12T05:02:49Z)
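The RLEMMO entry above mentions a reward that encourages both quality and diversity; a toy illustration of that idea (not the paper's actual reward mechanism) could combine fitness improvement with distance to the current population:

```python
def reward(candidate_fitness, parent_fitness, candidate_x, population_x):
    """Toy quality-plus-diversity reward: improvement over the parent plus the
    distance to the nearest other member of the population (illustrative only)."""
    quality = max(0.0, candidate_fitness - parent_fitness)   # assumes a maximization problem
    diversity = min(
        sum((a - b) ** 2 for a, b in zip(candidate_x, other)) ** 0.5
        for other in population_x
    )
    return quality + 0.1 * diversity                         # 0.1 is an arbitrary weight

# Example: a slightly better candidate that also sits away from the current population.
print(reward(1.2, 1.0, [0.5, 0.5], [[0.0, 0.0], [1.0, 1.0]]))
```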
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similar performance improvements on code generation tasks.
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
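The step-level reward entry above describes a greedy search guided by PRM scores. The sketch below mirrors that idea with hypothetical placeholders: generate_candidate_steps stands in for sampling steps from an LLM, and prm_score stands in for the process-supervised reward model.

```python
def generate_candidate_steps(partial_solution, n_candidates):
    """Hypothetical stand-in for sampling candidate next steps from an LLM."""
    return [0.5, 1.0, 2.0, 4.0][:n_candidates]

def prm_score(partial_solution, step, target=10.0):
    """Hypothetical stand-in for a process-supervised reward model: here it simply
    prefers steps that move the running total closer to a target value."""
    return -abs(sum(partial_solution) + step - target)

def greedy_prm_search(max_steps=8, n_candidates=4):
    """Greedily extend the reasoning path with the candidate the PRM scores highest."""
    path = []
    for _ in range(max_steps):
        candidates = generate_candidate_steps(path, n_candidates)
        best = max(candidates, key=lambda s: prm_score(path, s))
        path.append(best)
        if prm_score(path[:-1], best) == 0.0:      # stop once the toy PRM is fully satisfied
            break
    return path

print(greedy_prm_search())
```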
- Multiple Independent DE Optimizations to Tackle Uncertainty and Variability in Demand in Inventory Management [0.0]
This study aims to discern the most effective strategy for minimizing inventory costs within the context of uncertain demand patterns.
To find the optimal solution, the study focuses on meta-heuristic approaches and compares multiple algorithms.
arXiv Detail & Related papers (2023-09-22T13:15:02Z)
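The DE-for-inventory entry above applies differential evolution under uncertain demand. A compact, generic illustration using SciPy's differential_evolution on a toy newsvendor-style cost (holding plus shortage cost averaged over sampled demand scenarios) might look like this; the cost model and parameters are assumptions, not the study's setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(42)
demand_scenarios = rng.normal(loc=100.0, scale=20.0, size=500)   # uncertain demand samples

def expected_inventory_cost(x, holding_cost=1.0, shortage_cost=4.0):
    """Average holding + shortage cost of an order quantity over the demand scenarios."""
    order_qty = x[0]
    overage = np.maximum(order_qty - demand_scenarios, 0.0)
    underage = np.maximum(demand_scenarios - order_qty, 0.0)
    return float(np.mean(holding_cost * overage + shortage_cost * underage))

result = differential_evolution(expected_inventory_cost, bounds=[(0.0, 300.0)], seed=1)
print(f"best order quantity: {result.x[0]:.1f}, expected cost: {result.fun:.2f}")
```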
- Distributional Reinforcement Learning for Scheduling of (Bio)chemical Production Processes [0.0]
Reinforcement Learning (RL) has recently received significant attention from the process systems engineering and control communities.
We present a RL methodology to address precedence and disjunctive constraints as commonly imposed on production scheduling problems.
arXiv Detail & Related papers (2022-03-01T17:25:40Z)
- Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely Markov persuasion processes (MPPs).
Planning in MPPs faces the unique challenge in finding a signaling policy that is simultaneously persuasive to the myopic receivers and inducing the optimal long-term cumulative utilities of the sender.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
arXiv Detail & Related papers (2022-02-22T05:41:43Z)
- Learning Optimization Proxies for Large-Scale Security-Constrained Economic Dispatch [11.475805963049808]
Security-Constrained Economic Dispatch (SCED) is a fundamental optimization model for Transmission System Operators (TSO).
This paper proposes to learn an optimization proxy for SCED, i.e., a Machine Learning (ML) model that can predict an optimal solution for SCED in milliseconds.
Numerical experiments on the French transmission system demonstrate the approach's ability to produce solutions within a time frame that is compatible with real-time operations.
arXiv Detail & Related papers (2021-12-27T00:44:06Z)
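The optimization-proxy entry above learns an ML model that maps system conditions to an optimal SCED solution. Below is a generic sketch of that idea on synthetic data (fit a regressor on solved instances, then predict instead of re-solving); the MLP model and the toy linear mapping are assumptions, not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for solved SCED instances: inputs are nodal loads,
# targets are the (pretend) optimal generator dispatches.
loads = rng.uniform(0.5, 1.5, size=(2000, 10))
dispatch = loads @ rng.uniform(0.2, 0.8, size=(10, 5))   # toy linear "optimal" mapping

proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
proxy.fit(loads[:1500], dispatch[:1500])                 # train on solved instances

# At "real time", a prediction replaces re-solving the optimization.
predicted = proxy.predict(loads[1500:])
mae = np.mean(np.abs(predicted - dispatch[1500:]))
print(f"mean absolute dispatch error on held-out instances: {mae:.3f}")
```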
- Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z)