Generative Discrete Event Process Simulation for Hidden Markov Models to Predict Competitor Time-to-Market
- URL: http://arxiv.org/abs/2411.04266v1
- Date: Wed, 06 Nov 2024 21:17:38 GMT
- Title: Generative Discrete Event Process Simulation for Hidden Markov Models to Predict Competitor Time-to-Market
- Authors: Nandakishore Santhi, Stephan Eidenbenz, Brian Key, George Tompkins
- Abstract summary: We show how Firm A can build a model that predicts when Firm B will be ready to sell its product.
We study the question of how many resource observations Firm A requires in order to accurately assess the current state of development at Firm B.
- Abstract: We study the challenge of predicting the time at which a competitor product, such as a novel high-capacity EV battery or a new car model, will be available to customers; as new information is obtained, this time-to-market estimate is revised. Our scenario is as follows: We assume that the product is under development at Firm B, which is a competitor to Firm A; as they are in the same industry, Firm A has a relatively good understanding of the processes and steps required to produce the product. While Firm B tries to keep its activities hidden (think of stealth mode for start-ups), Firm A is nevertheless able to gain periodic insights by observing what type of resources Firm B is using. We show how Firm A can build a model that predicts when Firm B will be ready to sell its product; the model leverages knowledge of the underlying processes and required resources to build a Parallel Discrete Event Simulation (PDES)-based process model that it then uses as a generative model to train a Hidden Markov Model (HMM). We study the question of how many resource observations Firm A requires in order to accurately assess the current state of development at Firm B. In order to gain general insights into the capabilities of this approach, we study the effect of different process graph densities, different densities of the resource-activity maps, etc., as well as scaling properties as we increase the number of resource counts. We find that in most cases, the HMM achieves a prediction accuracy of 70 to 80 percent after 20 (daily) observations of a production process that lasts 150 days on average, and we characterize the effects of different problem instance densities on this prediction accuracy. Our results give insight into the level of market knowledge required for accurate and early time-to-market prediction.
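The core inference step the abstract describes, estimating Firm B's hidden development stage from daily resource observations, can be sketched with a standard HMM forward filter. This is a minimal pure-Python illustration: the stage names, transition matrix, and emission matrix below are invented for the example (in the paper these would be learned from PDES-generated process traces), and observations are indices into three assumed resource classes.

```python
# Hypothetical 4-stage development process; names and probabilities are
# illustrative assumptions, not values from the paper.
STAGES = ["design", "prototype", "testing", "production"]

# Transition matrix: each day the project stays in its stage or advances
# to the next one (left-to-right HMM with geometric stage durations).
T = [
    [0.95, 0.05, 0.00, 0.00],
    [0.00, 0.93, 0.07, 0.00],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
]

# Emission matrix: probability of observing each of three resource
# classes (e.g. design tools, lab equipment, assembly labor) per stage.
E = [
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.6, 0.3],
    [0.1, 0.2, 0.7],
]

def forward_filter(obs, T, E, prior):
    """Return the filtered posterior over hidden stages after each observation."""
    n = len(T)
    belief = prior[:]
    posteriors = []
    for o in obs:
        # Predict: propagate yesterday's belief through the transition model.
        pred = [sum(belief[i] * T[i][j] for i in range(n)) for j in range(n)]
        # Update: weight by the emission likelihood of today's observation.
        upd = [pred[j] * E[j][o] for j in range(n)]
        z = sum(upd)
        belief = [u / z for u in upd]
        posteriors.append(belief[:])
    return posteriors

# 20 daily resource observations, mirroring the paper's observation window.
obs = [0] * 5 + [1] * 8 + [2] * 7
post = forward_filter(obs, T, E, [1.0, 0.0, 0.0, 0.0])
best = max(range(len(STAGES)), key=lambda s: post[-1][s])
print("most likely current stage:", STAGES[best])
```

From the filtered posterior, a time-to-market estimate follows by summing the expected remaining durations of the current and subsequent stages; the paper instead trains the HMM parameters from generative simulation runs rather than setting them by hand as done here.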
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z) - F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improved generalization, where the excess risk decreases as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z) - Optimizing Sales Forecasts through Automated Integration of Market Indicators [0.0]
This work investigates the potential of data-driven techniques to automatically select and integrate market indicators for improving customer demand predictions.
By adopting an exploratory methodology, we integrate macroeconomic time series, such as national GDP growth, into Neural Prophet and SARIMAX forecasting models.
We show that forecasts can be significantly enhanced by incorporating external information.
arXiv Detail & Related papers (2024-05-15T08:11:41Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - Startup success prediction and VC portfolio simulation using CrunchBase data [1.7897779505837144]
This paper focuses on startups at their Series B and Series C investment stages, aiming to predict key success milestones.
We introduce a novel deep learning model for predicting startup success, integrating a variety of factors such as funding metrics, founder features, and industry category.
Our work demonstrates the considerable promise of deep learning models and alternative unstructured data in predicting startup success.
arXiv Detail & Related papers (2023-09-27T10:22:37Z) - Can I Trust My Simulation Model? Measuring the Quality of Business Process Simulation Models [1.4027589547318842]
Business Process Simulation (BPS) is an approach to analyze the performance of business processes under different scenarios.
We propose a collection of measures to evaluate the quality of a BPS model.
arXiv Detail & Related papers (2023-03-30T15:40:26Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z) - Machine Learning Classification Methods and Portfolio Allocation: An Examination of Market Efficiency [3.3343612552681945]
We design a novel framework to examine market efficiency through out-of-sample (OOS) predictability.
We frame the asset pricing problem as a machine learning classification problem and construct classification models to predict return states.
The prediction-based portfolios beat the market with significant OOS economic gains.
arXiv Detail & Related papers (2021-08-04T20:48:27Z) - Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future [73.03458424369657]
In real-time forecasting in public health, data collection is a non-trivial and demanding task.
The 'backfill' phenomenon and its effect on model performance have barely been studied in the prior literature.
We formulate a novel problem and neural framework Back2Future that aims to refine a given model's predictions in real-time.
arXiv Detail & Related papers (2021-06-08T14:48:20Z) - Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks will perform the same as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z) - A Time Series Analysis-Based Stock Price Prediction Using Machine Learning and Deep Learning Models [0.0]
We present a very robust and accurate framework of stock price prediction that consists of an agglomeration of statistical, machine learning and deep learning models.
We use the stock price data, collected at five-minute intervals, of a well-known company listed on the National Stock Exchange (NSE) of India.
We contend that the agglomerative approach of model building that uses a combination of statistical, machine learning, and deep learning approaches, can very effectively learn from the volatile and random movement patterns in a stock price data.
arXiv Detail & Related papers (2020-04-17T19:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.