PsychFM: Predicting your next gamble
- URL: http://arxiv.org/abs/2007.01833v1
- Date: Fri, 3 Jul 2020 17:41:14 GMT
- Title: PsychFM: Predicting your next gamble
- Authors: Prakash Rajan, Krishna P. Miyapuram
- Abstract summary: Most human behavior can be framed as a choice prediction problem.
Since behavior is person-dependent, there is a need for a model that predicts choices on a per-person basis.
A novel hybrid model, the psychological factorisation machine (PsychFM), is proposed that combines concepts from machine learning with psychological theory.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a surge of interest in modeling human behavior, owing to its vast and diverse applications, which include modeling public policy, economic behavior, and consumer behavior. Most human behavior can be framed as a choice prediction problem. Prospect theory is a theoretical model that tries to explain anomalies in choice prediction; such theories explain the anomalies well but lack precision. Since behavior is person-dependent, there is a need for a model that predicts choices on a per-person basis: looking at the average person's choice does not necessarily shed light on a particular person's choice. Modeling the gambling problem on a per-person basis will help in recommendation systems and related areas. A novel hybrid model, the psychological factorisation machine (PsychFM), is proposed that combines concepts from machine learning with psychological theory. It outperforms popular existing models, namely random forest and factorisation machines, on the benchmark dataset CPC-18. Finally, the efficacy of the proposed hybrid model is verified by comparison with these existing models.
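To make the hybrid idea concrete, below is a minimal, illustrative sketch of how prospect-theory quantities can be turned into features and scored with a second-order factorisation machine. It is an assumption-laden reconstruction, not the authors' implementation: the feature set, the per-person one-hot encoding, and the classic Tversky-Kahneman parameter values (alpha=0.88, lambda=2.25, gamma=0.61) are all illustrative choices.

```python
# Illustrative sketch only: prospect-theory features feeding a factorisation
# machine, in the spirit of the PsychFM idea. Parameter values and feature
# choices are assumptions, not taken from the paper.
import numpy as np

def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, steeper for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def prob_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def gamble_features(outcome_a, p_a, outcome_b, p_b):
    """Psychological features for a two-outcome gamble (A with prob p_a, else B)."""
    return np.array([
        prob_weight(p_a) * prospect_value(outcome_a)
        + prob_weight(p_b) * prospect_value(outcome_b),  # weighted subjective value
        p_a * outcome_a + p_b * outcome_b,               # expected value
        abs(outcome_a - outcome_b),                      # outcome spread (risk proxy)
    ])

def fm_predict(x, w0, w, V):
    """Second-order FM: w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j."""
    linear = w0 + w @ x
    interactions = 0.5 * (np.sum((V.T @ x) ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return linear + interactions

# Toy usage: score one gamble for one (hypothetical) person. Per-person one-hot
# features are concatenated so the model can capture individual differences.
person_onehot = np.zeros(5)           # hypothetical 5-person pool
person_onehot[2] = 1.0
x = np.concatenate([gamble_features(100, 0.5, -50, 0.5), person_onehot])
rng = np.random.default_rng(0)
w0, w = 0.0, rng.normal(size=x.shape[0])
V = rng.normal(scale=0.1, size=(x.shape[0], 4))
score = fm_predict(x, w0, w, V)       # higher score -> more likely to take the gamble
print(f"choice probability: {1 / (1 + np.exp(-score)):.3f}")
```

In a per-person setting such as CPC-18, the factorisation machine's interaction terms let the person indicator features combine with the psychological features, which is one plausible way to capture person-dependent choice behavior.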
Related papers
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Limitations of Agents Simulated by Predictive Models [1.6649383443094403]
We outline two structural reasons for why predictive models can fail when turned into agents.
We show that both of those failures are fixed by including a feedback loop from the environment.
Our treatment provides a unifying view of those failure modes, and informs the question of why fine-tuning offline learned policies with online learning makes them more effective.
arXiv Detail & Related papers (2024-02-08T17:08:08Z) - Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z) - How to select predictive models for causal inference? [0.0]
We show that classic machine-learning model selection does not select the best outcome models for causal inference.
We outline a good causal model-selection procedure: using the so-called $R\text{-risk}$, and using flexible estimators to compute the nuisance models on the training set (a sketch of this risk appears after this list).
arXiv Detail & Related papers (2023-02-01T10:58:55Z) - A prediction and behavioural analysis of machine learning methods for modelling travel mode choice [0.26249027950824505]
We conduct a systematic comparison of different modelling approaches, across multiple modelling problems, in terms of the key factors likely to affect model choice.
Results indicate that the models with the highest disaggregate predictive performance provide poorer estimates of behavioural indicators and aggregate mode shares.
It is also observed that the MNL (multinomial logit) model performs robustly in a variety of situations, though ML techniques can improve the estimates of behavioural indices such as Willingness to Pay.
arXiv Detail & Related papers (2023-01-11T11:10:32Z) - Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, we are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z) - Test-time Collective Prediction [73.74982509510961]
We consider a machine-learning setting in which multiple parties want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z) - Learning Opinion Dynamics From Social Traces [25.161493874783584]
We propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces.
We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart.
We apply our model to real-world data from Reddit to explore the long-standing question of the impact of the backfire effect.
arXiv Detail & Related papers (2020-06-02T14:48:17Z)
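The causal model-selection entry above refers to an $R\text{-risk}$ without defining it. As a hedged illustration (an assumption, since the summary does not spell the quantity out), the sketch below computes an empirical R-risk in the style of the standard R-learner objective, scoring a candidate treatment-effect model against nuisance estimates of the outcome and treatment models; the variable names and simulated data are illustrative only.

```python
# Hedged sketch (assumption): the "R-risk" is shown here as an R-learner-style
# objective, scoring a candidate treatment-effect model tau(X) against nuisance
# estimates m(X) ~ E[Y | X] and e(X) ~ E[A | X].
import numpy as np

def empirical_r_risk(y, a, tau_hat, m_hat, e_hat):
    """Mean squared R-loss: ((Y - m(X)) - (A - e(X)) * tau(X))^2, averaged over samples."""
    residual = (y - m_hat) - (a - e_hat) * tau_hat
    return np.mean(residual ** 2)

# Toy usage with simulated data; in practice m_hat and e_hat would come from
# flexible estimators fit on the training split, as the entry recommends.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)              # binary treatment
tau_true = 1.0 + 0.5 * x                      # heterogeneous treatment effect
y = x + a * tau_true + rng.normal(size=n)
m_hat = x + 0.5 * tau_true                    # idealized nuisances for illustration
e_hat = np.full(n, 0.5)
print(empirical_r_risk(y, a, tau_true, m_hat, e_hat))     # low risk for the true effect
print(empirical_r_risk(y, a, np.zeros(n), m_hat, e_hat))  # higher risk for a poor model
```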