Beyond Our Behavior: The GDPR and Humanistic Personalization
- URL: http://arxiv.org/abs/2008.13404v1
- Date: Mon, 31 Aug 2020 07:40:09 GMT
- Title: Beyond Our Behavior: The GDPR and Humanistic Personalization
- Authors: Travis Greene, Galit Shmueli
- Abstract summary: We propose a new paradigm of humanistic personalization.
We re-frame the distinction between implicit and explicit data collection as one of nonconscious ("organismic") behavior and conscious ("reflective") action.
We discuss how an emphasis on narrative accuracy can reduce opportunities for epistemic injustice done to data subjects.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalization should take the human person seriously. This requires a
deeper understanding of how recommender systems can shape both our
self-understanding and identity. We unpack key European humanistic and
philosophical ideas underlying the General Data Protection Regulation (GDPR)
and propose a new paradigm of humanistic personalization. Humanistic
personalization responds to the IEEE's call for Ethically Aligned Design (EAD)
and is based on fundamental human capacities and values. Humanistic
personalization focuses on narrative accuracy: the subjective fit between a
person's self-narrative and both the input (personal data) and output of a
recommender system. In doing so, we re-frame the distinction between implicit
and explicit data collection as one of nonconscious ("organismic") behavior and
conscious ("reflective") action. This distinction raises important ethical and
interpretive issues related to agency, self-understanding, and political
participation. Finally, we discuss how an emphasis on narrative accuracy can
reduce opportunities for epistemic injustice done to data subjects.
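The abstract treats the organismic/reflective distinction and "narrative accuracy" conceptually rather than formally. Purely as an illustration (not from the paper), the sketch below separates organismic (behavioral) from reflective (self-reported) personal data and uses a Jaccard overlap, an assumption of ours, as a crude stand-in for the "subjective fit" that narrative accuracy describes.

```python
"""Toy illustration only: the paper defines no formal metric. Jaccard overlap
between a user's self-narrative and the profile a recommender infers from
behavior is a hypothetical proxy for "subjective fit"."""
from dataclasses import dataclass, field


@dataclass
class PersonalData:
    # Reflective ("explicit") data: consciously authored self-descriptions.
    self_narrative: set[str] = field(default_factory=set)
    # Organismic ("implicit") data: nonconscious behavioral traces.
    behavioral_traces: list[str] = field(default_factory=list)


def inferred_profile(data: PersonalData) -> set[str]:
    """Pretend recommender: infer interest labels from raw behavioral traces."""
    return {trace.split(":")[0] for trace in data.behavioral_traces}


def narrative_fit(data: PersonalData) -> float:
    """Illustrative proxy: overlap between the self-narrative and the
    behaviorally inferred profile (1.0 = perfect subjective fit)."""
    inferred = inferred_profile(data)
    if not data.self_narrative and not inferred:
        return 1.0
    return len(data.self_narrative & inferred) / len(data.self_narrative | inferred)


if __name__ == "__main__":
    user = PersonalData(
        self_narrative={"jazz", "philosophy", "cycling"},
        behavioral_traces=["jazz:album_play", "true_crime:autoplay", "cycling:route_search"],
    )
    print(f"narrative fit: {narrative_fit(user):.2f}")  # 0.50
```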
Related papers
- Democratizing Reward Design for Personal and Representative Value-Alignment [10.1630183955549]
We introduce Interactive-Reflective Dialogue Alignment, a method that iteratively engages users in reflecting on and specifying their subjective value definitions.
This system learns individual value definitions through language-model-based preference elicitation and constructs personalized reward models.
Our findings demonstrate diverse definitions of value-aligned behaviour and show that our system can accurately capture each person's unique understanding.
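The abstract describes preference elicitation feeding a personalized reward model but gives no implementation details. The sketch below is only a guess at the general shape: pairwise value judgments from a user drive a perceptron-style update of a linear reward model. The feature names and update rule are assumptions; the actual system elicits value definitions through language-model dialogue.

```python
"""Minimal sketch (our assumption, not the authors' implementation) of turning
pairwise value judgments into a personal reward model."""
FEATURES = ["honesty", "bluntness", "hedging", "humor"]


def reward(weights: dict[str, float], behavior: dict[str, float]) -> float:
    return sum(weights[f] * behavior.get(f, 0.0) for f in FEATURES)


def fit_personal_reward(pairs, lr=0.1, epochs=50):
    """pairs: list of (preferred_behavior, rejected_behavior) feature dicts,
    as a user might produce while reflecting on their own value definitions."""
    weights = {f: 0.0 for f in FEATURES}
    for _ in range(epochs):
        for better, worse in pairs:
            # Perceptron-style update toward the preferred behavior.
            if reward(weights, better) <= reward(weights, worse):
                for f in FEATURES:
                    weights[f] += lr * (better.get(f, 0.0) - worse.get(f, 0.0))
    return weights


if __name__ == "__main__":
    # A user who prefers honest-but-gentle answers over blunt ones.
    elicited = [
        ({"honesty": 1.0, "hedging": 0.5}, {"honesty": 1.0, "bluntness": 1.0}),
        ({"honesty": 0.8, "humor": 0.3}, {"bluntness": 0.9}),
    ]
    print(fit_personal_reward(elicited))
```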
arXiv Detail & Related papers (2024-10-29T16:37:01Z)
- Can Language Models Reason about Individualistic Human Values and Preferences? [44.249817353449146]
We study language models (LMs) on the specific challenge of individualistic value reasoning.
We reveal critical limitations in frontier LMs' abilities to reason about individualistic human values, with accuracies between 55% and 65%.
We also identify a partiality of LMs in reasoning about global individualistic values, as measured by our proposed Value Inequity Index (σ_INEQUITY).
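The abstract does not spell out how σ_INEQUITY is computed, so the following is only a stand-in: the spread (population standard deviation) of per-group accuracies, over made-up numbers.

```python
"""Illustrative stand-in for an inequity index; not the paper's definition."""
from statistics import pstdev


def value_inequity(per_group_accuracy: dict[str, float]) -> float:
    """Spread of accuracy across (hypothetical) groups;
    0.0 means the model reasons equally well about everyone's values."""
    return pstdev(per_group_accuracy.values())


if __name__ == "__main__":
    accuracies = {"group_A": 0.65, "group_B": 0.58, "group_C": 0.55}  # made-up
    print(f"inequity index (illustrative): {value_inequity(accuracies):.3f}")
```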
arXiv Detail & Related papers (2024-10-04T19:03:41Z)
- Personality Alignment of Large Language Models [26.071445846818914]
Current methods for aligning large language models (LLMs) typically aim to reflect general human values and behaviors.
We introduce the concept of Personality Alignment.
This approach tailors LLMs' responses and decisions to match the specific preferences of individual users or closely related groups.
arXiv Detail & Related papers (2024-08-21T17:09:00Z)
- Aligning Large Language Models with Human Opinions through Persona Selection and Value-Belief-Norm Reasoning [67.33899440998175]
Chain-of-Opinion (COO) is a simple four-step prompting solution that models which personae to reason with and how to reason with them.
COO distinguishes between explicit personae (demographics and ideology) and implicit personae (historical opinions).
COO efficiently achieves new state-of-the-art opinion prediction via prompting with only 5 inference calls, improving prior techniques by up to 4%.
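Based only on the abstract, a COO-style pipeline might look like the sketch below. The step breakdown, prompt wording, and the `call_llm` stub are hypothetical; the paper reports 5 inference calls, and the split here is a guess.

```python
"""Rough sketch of a Chain-of-Opinion-style prompting pipeline (assumptions
throughout); `call_llm` is a placeholder for a real chat-completion client."""


def call_llm(prompt: str) -> str:
    """Placeholder: wire up your own LLM client here."""
    raise NotImplementedError


def chain_of_opinion(question: str, explicit_persona: str, implicit_opinions: list[str]) -> str:
    # Step 1: summarize the explicit persona (demographics, ideology).
    persona_summary = call_llm(f"Summarize this persona: {explicit_persona}")
    # Step 2: select the historical opinions most relevant to the question.
    relevant = call_llm(
        f"Question: {question}\nPast opinions: {implicit_opinions}\n"
        "Which past opinions are most relevant?"
    )
    # Step 3: reason from persona + relevant opinions toward a tentative answer.
    reasoning = call_llm(
        f"Persona: {persona_summary}\nRelevant opinions: {relevant}\n"
        f"Reason step by step about how this person would answer: {question}"
    )
    # Step 4: commit to a final opinion prediction.
    return call_llm(f"Given this reasoning:\n{reasoning}\nFinal answer to: {question}")
```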
arXiv Detail & Related papers (2023-11-14T18:48:27Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
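A hedged sketch of what an evaluation loop over these five aspects could look like; the `generate` and `score` callables are placeholders, not the benchmark's actual scorers.

```python
"""Sketch of a persona-bias evaluation loop; scoring functions are hypothetical."""
from typing import Callable

ASPECTS = ["offensiveness", "toxic_continuation", "regard",
           "stereotype_agreement", "toxic_agreement"]


def evaluate_persona_bias(
    generate: Callable[[str, str], str],   # (persona, prompt) -> response
    score: Callable[[str, str], float],    # (aspect, response) -> harm score
    personas: list[str],
    prompts: list[str],
) -> dict[str, dict[str, float]]:
    """Average harm score per persona and aspect; persona bias shows up as
    differences in these averages across the adopted personas."""
    results: dict[str, dict[str, float]] = {}
    for persona in personas:
        responses = [generate(persona, p) for p in prompts]
        results[persona] = {
            aspect: sum(score(aspect, r) for r in responses) / len(responses)
            for aspect in ASPECTS
        }
    return results
```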
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties [68.66719970507273]
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
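A guess at the shape of a ValuePrism-style record, inferred only from the abstract (values, rights, and duties attached to a human-written situation); the field names are ours, not the dataset's schema.

```python
"""Hypothetical record structure; not the published schema."""
from dataclasses import dataclass, field


@dataclass
class SituationRecord:
    situation: str                                  # human-written scenario
    values: list[str] = field(default_factory=list)
    rights: list[str] = field(default_factory=list)
    duties: list[str] = field(default_factory=list)


example = SituationRecord(
    situation="Telling a friend their business plan is likely to fail.",
    values=["honesty", "kindness"],
    rights=["right to make one's own choices"],
    duties=["duty not to cause unnecessary harm"],
)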
arXiv Detail & Related papers (2023-09-02T01:24:59Z)
- Improving Personality Consistency in Conversation by Persona Extending [22.124187337032946]
We propose a novel retrieval-to-prediction paradigm consisting of two subcomponents: the Persona Retrieval Model (PRM) and the Posterior-scored Transformer (PS-Transformer).
Our proposed model yields considerable improvements in both automatic metrics and human evaluations.
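A very rough sketch of the retrieval half of this paradigm, with token-overlap retrieval standing in for the Persona Retrieval Model; the posterior scoring done by the PS-Transformer is omitted entirely.

```python
"""Toy retrieval stand-in; not the paper's PRM."""
import string


def _tokens(text: str) -> set[str]:
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())


def retrieve_personas(context: str, persona_pool: list[str], k: int = 2) -> list[str]:
    """Return the k persona sentences with the largest token overlap with the context."""
    ctx = _tokens(context)
    ranked = sorted(persona_pool, key=lambda p: len(ctx & _tokens(p)), reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    pool = ["I love hiking in the mountains.",
            "I work as a nurse.",
            "My favorite food is ramen."]
    print(retrieve_personas("Any plans for the mountains this weekend?", pool))
```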
arXiv Detail & Related papers (2022-08-23T09:00:58Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables off-the-shelf, non-private fair models to be used to create a privacy-preserving and fair model.
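SF-PATE builds on PATE-style teacher aggregation. The sketch below shows generic noisy-vote aggregation (Laplace noise on teacher vote counts), not the paper's fairness-aware algorithm, and the noise scale is illustrative rather than calibrated.

```python
"""Generic PATE-style noisy aggregation; the fairness component of SF-PATE is omitted."""
import random
from collections import Counter


def noisy_aggregate(teacher_predictions: list[int], epsilon: float = 1.0) -> int:
    """Return the label with the highest Laplace-noised vote count.
    (Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).)"""
    votes = Counter(teacher_predictions)
    noisy = {
        label: count + random.expovariate(epsilon) - random.expovariate(epsilon)
        for label, count in votes.items()
    }
    return max(noisy, key=noisy.get)


if __name__ == "__main__":
    random.seed(0)
    teachers = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # ten teachers' votes on one example
    print(noisy_aggregate(teachers))
```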
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Exosoul: ethical profiling in the digital world [3.6245424131171813]
The project Exosoul aims at developing a personalized software exoskeleton which mediates actions in the digital world according to the moral preferences of the user.
The approach is hybrid: profiles are first identified in a top-down manner and then refined by a personalized, data-driven approach.
We consider the correlations between ethics positions (idealism and relativism), personality traits (honesty/humility, conscientiousness, Machiavellianism, and narcissism), and worldview (normativism).
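A toy version of the kind of correlation analysis the abstract describes; all numbers below are invented, and the Pearson formula is the only substantive content.

```python
"""Illustrative Pearson correlation over made-up questionnaire scores."""
from statistics import mean, pstdev


def pearson(x: list[float], y: list[float]) -> float:
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))


if __name__ == "__main__":
    # Hypothetical per-participant scores (not real study data).
    idealism = [4.2, 3.1, 4.8, 2.5, 3.9]
    machiavellianism = [1.8, 3.0, 1.5, 3.6, 2.2]
    print(f"r(idealism, Machiavellianism) = {pearson(idealism, machiavellianism):.2f}")
```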
arXiv Detail & Related papers (2022-03-30T10:54:00Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)