Entity Aware Modelling: A Survey
- URL: http://arxiv.org/abs/2302.08406v1
- Date: Thu, 16 Feb 2023 16:33:33 GMT
- Title: Entity Aware Modelling: A Survey
- Authors: Rahul Ghosh, Haoyu Yang, Ankush Khandelwal, Erhu He, Arvind
Renganathan, Somya Sharma, Xiaowei Jia and Vipin Kumar
- Abstract summary: Recent machine learning advances have led to new state-of-the-art response prediction models.
Models built at a population level often lead to sub-optimal performance in many personalized prediction settings.
In personalized prediction, the goal is to incorporate inherent characteristics of different entities to improve prediction performance.
- Score: 22.32009539611539
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized prediction of responses for individual entities caused by
external drivers is vital across many disciplines. Recent machine learning (ML)
advances have led to new state-of-the-art response prediction models. Models
built at a population level often lead to sub-optimal performance in many
personalized prediction settings due to heterogeneity in data across entities
(tasks). In personalized prediction, the goal is to incorporate inherent
characteristics of different entities to improve prediction performance. In
this survey, we focus on the recent developments in the ML community for such
entity-aware modeling approaches. ML algorithms often modulate the network
using these entity characteristics when they are readily available. However,
these entity characteristics are not readily available in many real-world
scenarios, and different ML methods have been proposed to infer these
characteristics from the data. In this survey, we have organized the current
literature on entity-aware modeling based on the availability of these
characteristics as well as the amount of training data. We highlight how recent
innovations in other disciplines, such as uncertainty quantification, fairness,
and knowledge-guided machine learning, can improve entity-aware modeling.
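The abstract's idea of "modulating the network using entity characteristics" can be made concrete with a small sketch. Below is a minimal, illustrative FiLM-style conditioning example (one common modulation scheme; the survey covers many). All weights and names (`W_h`, `W_gamma`, `W_beta`, `W_out`) are random placeholders, not taken from any paper in this list.

```python
import numpy as np

# Minimal sketch of entity-aware modulation: entity characteristics z
# produce a scale (gamma) and shift (beta) that modulate the hidden
# features of a shared, population-level network.

rng = np.random.default_rng(0)

n_features, n_entity, n_hidden = 4, 3, 8
W_h = rng.normal(size=(n_features, n_hidden))    # shared feature layer
W_gamma = rng.normal(size=(n_entity, n_hidden))  # entity traits -> scale
W_beta = rng.normal(size=(n_entity, n_hidden))   # entity traits -> shift
W_out = rng.normal(size=(n_hidden, 1))           # shared output layer

def predict(x, z):
    """Predict a response for external drivers x, conditioned on entity traits z."""
    h = np.tanh(x @ W_h)        # population-level representation
    gamma = z @ W_gamma         # entity-specific scale
    beta = z @ W_beta           # entity-specific shift
    h_mod = gamma * h + beta    # modulated (entity-aware) features
    return (h_mod @ W_out).item()

x = rng.normal(size=n_features)      # same external drivers for both entities
z_a = np.array([1.0, 0.0, 0.0])      # entity A characteristics
z_b = np.array([0.0, 1.0, 0.0])      # entity B characteristics

# Identical drivers, different entities -> different predictions.
print(predict(x, z_a), predict(x, z_b))
```

When entity characteristics are not directly observed, the same architecture is typically paired with an inference step that estimates `z` from each entity's data, which is the second branch of the taxonomy the survey describes.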
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- LLM-Select: Feature Selection with Large Language Models [64.5099482021597]

Large language models (LLMs) are capable of selecting the most predictive features, with performance rivaling the standard tools of data science.
Our findings suggest that LLMs may be useful not only for selecting the best features for training but also for deciding which features to collect in the first place.
arXiv Detail & Related papers (2024-07-02T22:23:40Z)
- IGANN Sparse: Bridging Sparsity and Interpretability with Non-linear Insight [4.010646933005848]
IGANN Sparse is a novel machine learning model from the family of generalized additive models.
It promotes sparsity through a non-linear feature selection process during training.
This ensures interpretability through improved model sparsity without sacrificing predictive performance.
arXiv Detail & Related papers (2024-03-17T22:44:36Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, including local importance, global importance, and surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
arXiv Detail & Related papers (2023-02-23T15:28:36Z)
- A prediction and behavioural analysis of machine learning methods for modelling travel mode choice [0.26249027950824505]
We conduct a systematic comparison of different modelling approaches, across multiple modelling problems, in terms of the key factors likely to affect model choice.
Results indicate that the models with the highest disaggregate predictive performance provide poorer estimates of behavioural indicators and aggregate mode shares.
It is also observed that the MNL model performs robustly in a variety of situations, though ML techniques can improve the estimates of behavioural indices such as Willingness to Pay.
arXiv Detail & Related papers (2023-01-11T11:10:32Z)
- Explainable Artificial Intelligence for Improved Modeling of Processes [6.29494485203591]
We evaluate the capability of modern Transformer architectures and more classical Machine Learning technologies of modeling process regularities.
We show that the ML models are capable of predicting critical outcomes and that the attention mechanisms or XAI components offer new insights into the underlying processes.
arXiv Detail & Related papers (2022-12-01T17:56:24Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data. Instead, we are given access to a set of expert models and their predictions, alongside some limited information about the datasets used to train them.
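This setting can be sketched with a generic instance-wise weighted ensemble: weight each expert's prediction by how close the test point is to a summary of that expert's training data. This is an illustration of the problem setup under stated assumptions, not the Synthetic Model Combination algorithm itself; the centres and toy experts below are invented for the example.

```python
import numpy as np

# Instance-wise unsupervised ensembling sketch: no labels are available
# at test time, only expert models and limited information about their
# training data (here, just the training-set centre of each expert).

centres = np.array([-2.0, 0.0, 2.0])              # stand-in training summaries
experts = [lambda x, c=c: float(x - c) for c in centres]  # toy expert models

def ensemble_predict(x):
    """Combine expert predictions with softmax weights based on proximity."""
    dists = np.abs(centres - x)       # distance of x to each expert's centre
    weights = np.exp(-dists)
    weights /= weights.sum()          # normalise weights onto the simplex
    preds = np.array([f(x) for f in experts])
    return float(weights @ preds)

print(ensemble_predict(0.0))
```

The key design choice is that the combination weights depend on the individual test instance, so different test points can lean on different experts rather than using one fixed global mixture.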
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints [5.783415024516947]
This paper investigates a series of intrinsically interpretable machine learning models.
We evaluate the prediction qualities of five GAMs as compared to six traditional ML models.
arXiv Detail & Related papers (2022-04-19T20:37:31Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon relates to exploration, and how some models that score lower on standard benchmarks perform on par with the best-performing models when trained on the same data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.