Creating user stereotypes for persona development from qualitative data
through semi-automatic subspace clustering
- URL: http://arxiv.org/abs/2306.14551v1
- Date: Mon, 26 Jun 2023 09:49:51 GMT
- Title: Creating user stereotypes for persona development from qualitative data
through semi-automatic subspace clustering
- Authors: Dannie Korsgaard, Thomas Bjorner, Pernille Krog Sorensen, Paolo
Burelli
- Abstract summary: We propose a method that employs the modelling of user stereotypes to automate part of the persona creation process.
Results show that manual techniques differ between human persona designers, leading to different results.
The proposed algorithm provides similar results based on parameter input, but is more rigorous and finds optimal clusters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personas are models of users that incorporate motivations, wishes, and
objectives. These models are employed in user-centred design to help design
better user experiences and have recently been employed in adaptive systems to
help tailor the personalized user experience. Designing with personas involves
the production of descriptions of fictitious users, which are often based on
data from real users. The majority of data-driven persona development performed
today is based on qualitative data from a limited set of interviewees and
transformed into personas using labour-intensive manual techniques. In this
study, we propose a method that employs the modelling of user stereotypes to
automate part of the persona creation process and addresses the drawbacks of
the existing semi-automated methods for persona development. The description of
the method is accompanied by an empirical comparison with a manual technique
and a semi-automated alternative (multiple correspondence analysis). The
results of the comparison show that manual techniques differ between human
persona designers, leading to different results. The proposed algorithm
provides similar results based on parameter input, but is more rigorous and
finds optimal clusters, while lowering the labour associated with finding the
clusters in the dataset. The output of the method also represents the largest
variances in the dataset identified by the multiple correspondence analysis.
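The core idea — coding interviewees as attribute vectors, grouping similar interviewees, and keeping only the attributes a group agrees on as its stereotype — can be illustrated with a minimal sketch. This is not the paper's actual subspace-clustering algorithm; the attribute names, data, and the simple two-way Hamming split below are all invented for illustration.

```python
# Hypothetical attribute coding for 12 interviewees (1 = applies, 0 = does not).
ATTRIBUTES = ["cooks_daily", "eats_alone", "budget_conscious", "health_focused"]
ANSWERS = [
    [1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 0], [1, 0, 1, 1],
    [1, 0, 1, 1], [1, 0, 1, 1],
    [0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 1], [0, 1, 0, 0],
    [0, 1, 0, 0], [0, 1, 0, 0],
]

def hamming(a, b):
    """Number of attributes on which two interviewees differ."""
    return sum(x != y for x, y in zip(a, b))

def cluster(rows):
    """Toy two-way split: seed with the first row and the row farthest
    from it, then assign every row to the nearer seed."""
    seed0 = rows[0]
    seed1 = max(rows, key=lambda r: hamming(r, seed0))
    groups = ([], [])
    for row in rows:
        groups[hamming(row, seed0) > hamming(row, seed1)].append(row)
    return groups

def stereotype(group, threshold=0.8):
    """Keep only attributes with at least `threshold` within-group
    agreement; the kept value is the majority answer. This attribute
    subset plays the role of the 'subspace' defining the stereotype."""
    result = {}
    for j, name in enumerate(ATTRIBUTES):
        share = sum(row[j] for row in group) / len(group)
        if share >= threshold or share <= 1 - threshold:
            result[name] = int(round(share))
    return result

for group in cluster(ANSWERS):
    print(len(group), stereotype(group))
```

Attributes on which a group is split (agreement between 20% and 80%) are dropped from its stereotype, so each stereotype is described only by the attributes that actually characterise the group.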
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through
  Failure-Inducing Exploration [90.41908331897639]
  Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
  We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
  arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Towards Personalized Federated Learning via Heterogeneous Model
  Reassembly [84.44268421053043]
  pFedHR is a framework that leverages heterogeneous model reassembly to achieve personalized federated learning.
  pFedHR dynamically generates diverse personalized models in an automated manner.
  arXiv Detail & Related papers (2023-08-16T19:36:01Z)
- Amortised Experimental Design and Parameter Estimation for User Models
  of Pointing [5.076871870091048]
  We show how experiments can be designed so as to gather data and infer parameters as efficiently as possible.
  We train a policy for choosing experimental designs with simulated participants.
  Our solution learns which experiments provide the most useful data for parameter estimation by interacting with in-silico agents sampled from the model space.
  arXiv Detail & Related papers (2023-07-19T10:17:35Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
  We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
  We find that providing black-box model explanations reduces users' accuracy on the matching task.
  On the other hand, custom methods designed to closely attend to task-specific desiderata are found to be effective in improving user performance.
  arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model
  Selection [77.86861638371926]
  We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
  We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
  arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- A Data-Driven Method for Automated Data Superposition with Applications
  in Soft Matter Science [0.0]
  We develop a data-driven, non-parametric method for superposing experimental data with arbitrary coordinate transformations.
  Our method produces interpretable data-driven models that may inform applications such as materials classification, design, and discovery.
  arXiv Detail & Related papers (2022-04-20T14:58:04Z)
- UserIdentifier: Implicit User Representations for Simple and Effective
  Personalized Sentiment Analysis [36.162520010250056]
  We propose UserIdentifier, a novel scheme for training a single shared model for all users.
  Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data.
  arXiv Detail & Related papers (2021-10-01T00:21:33Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
  We propose methods to facilitate identification of training data artifacts.
  We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
  We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
  arXiv Detail & Related papers (2021-07-01T09:26:13Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
  We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
  The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
  By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
  arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Human or Machine: Automating Human Likeliness Evaluation of NLG Texts [0.0]
  We propose to use a human likeliness score that shows the percentage of the output samples from a method that look as if they were written by a human.
  As follow-up, we plan to perform an empirical analysis of human-written and machine-generated texts to find the optimal setup of this evaluation approach.
  arXiv Detail & Related papers (2020-06-05T00:57:52Z)
- Large-scale Hybrid Approach for Predicting User Satisfaction with
  Conversational Agents [28.668681892786264]
  Measuring user satisfaction level is a challenging task, and a critical component in developing large-scale conversational agent systems.
  Human annotation based approaches are easier to control, but hard to scale.
  A novel alternative approach is to collect users' direct feedback via a feedback elicitation system embedded in the conversational agent system.
  arXiv Detail & Related papers (2020-05-29T16:29:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.