Do Large Language Models Understand Verbal Indicators of Romantic Attraction?
- URL: http://arxiv.org/abs/2407.10989v1
- Date: Sun, 23 Jun 2024 17:50:30 GMT
- Title: Do Large Language Models Understand Verbal Indicators of Romantic Attraction?
- Authors: Sandra C. Matz, Heinrich Peters, Paul W. Eastwick, Moran Cerf, Eli J. Finkel
- Abstract summary: We show that Large Language Models (LLMs) can detect romantic attraction during brief getting-to-know-you interactions.
We show that ChatGPT (and Claude 3) can predict both objective and subjective indicators of speed dating success.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: What makes people 'click' on a first date and become mutually attracted to one another? While understanding and predicting the dynamics of romantic interactions used to be exclusive to human judgment, we show that Large Language Models (LLMs) can detect romantic attraction during brief getting-to-know-you interactions. Examining data from 964 speed dates, we show that ChatGPT (and Claude 3) can predict both objective and subjective indicators of speed dating success (r=0.12-0.23). ChatGPT's predictions of actual matching (i.e., the exchange of contact information) were not only on par with those of human judges who had access to the same information but incremental to speed daters' own predictions. While some of the variance in ChatGPT's predictions can be explained by common content dimensions (such as the valence of the conversations), the fact that there remains a substantial proportion of unexplained variance suggests that ChatGPT also picks up on conversational dynamics. In addition, ChatGPT's judgments showed substantial overlap with those made by the human observers (mean r=0.29), highlighting similarities in their representation of romantic attraction that are, partially, independent of accuracy.
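To make the measurement concrete, below is a minimal sketch of this kind of pipeline: prompt an LLM to rate each transcript, then correlate the ratings with observed matching. The prompt wording, 0-10 rating scale, and model name are assumptions for illustration, not the authors' protocol.

```python
# Hypothetical sketch: score speed-date transcripts with an LLM and
# correlate the scores with actual matching. Prompt and scale are assumed.
from openai import OpenAI
from scipy.stats import pearsonr

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_transcript(transcript: str) -> float:
    """Ask the model for a single 0-10 attraction rating (prompt is hypothetical)."""
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": ("Rate the mutual romantic attraction in this "
                         "speed-date transcript from 0 to 10. "
                         "Reply with a single number.")},
            {"role": "user", "content": transcript},
        ],
    )
    return float(response.choices[0].message.content.strip())

def correlation_with_matching(transcripts: list[str], matched: list[int]) -> float:
    """Pearson r between LLM ratings and observed matching (1 = contact info exchanged)."""
    ratings = [rate_transcript(t) for t in transcripts]
    r, _ = pearsonr(ratings, matched)
    return r
```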
Related papers
- On the Role of Context in Reading Time Prediction [50.87306355705826]
We present a new perspective on how readers integrate context during real-time language comprehension.
Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit is an affine function of its in-context information content.
arXiv Detail & Related papers (2024-09-12T15:52:22Z)
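For reference, the in-context information content here is the surprisal -log p(unit | context). Below is a minimal sketch computing it per token; the use of GPT-2 via Hugging Face transformers is an assumption for illustration, not necessarily the paper's setup.

```python
# Minimal sketch: per-token surprisal, -log2 p(token | context), with GPT-2.
# Surprisal theory treats reading effort as an affine function a*s + b of this.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisals(text: str) -> list[tuple[str, float]]:
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # shape: (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    results = []
    for i in range(1, ids.size(1)):
        # The token at position i is predicted from the logits at position i - 1.
        lp = log_probs[0, i - 1, ids[0, i]].item()
        results.append((tokenizer.decode(ids[0, i].item()), -lp / math.log(2)))
    return results
```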
- Nonverbal Interaction Detection [83.40522919429337]
This work addresses a new challenge of understanding human nonverbal interaction in social contexts.
First, we contribute a novel large-scale dataset, called NVI, which is meticulously annotated to include bounding boxes for humans and corresponding social groups.
Second, we establish a new task, NVI-DET, for nonverbal interaction detection, which is formalized as identifying triplets of the form <individual, group, interaction> from images.
Third, we propose a nonverbal interaction detection hypergraph (NVI-DEHR), a new approach that explicitly models high-order nonverbal interactions using hypergraphs.
arXiv Detail & Related papers (2024-07-11T02:14:06Z)
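As a rough illustration of that triplet formalization, one detection could be represented as below; the field names and types are assumptions, not the paper's API.

```python
# Hypothetical sketch of an <individual, group, interaction> detection record.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

@dataclass
class NonverbalTriplet:
    individual: Box    # bounding box of the person
    group: Box         # bounding box of their social group
    interaction: str   # e.g. "wave", "point", "eye contact" (assumed labels)
    score: float       # detector confidence
```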
- Large Language Models Can Infer Personality from Free-Form User Interactions [0.0]
GPT-4 can infer personality with moderate accuracy, outperforming previous approaches.
Results show that the direct focus on personality assessment did not result in a less positive user experience.
Preliminary analyses suggest that the accuracy of personality inferences varies only marginally across different socio-demographic subgroups.
arXiv Detail & Related papers (2024-05-19T20:33:36Z)
- Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals [91.59906995214209]
We propose a new evaluation method, the Counterfactual Attentiveness Test (CAT).
CAT uses counterfactuals by replacing part of the input with its counterpart from a different example, expecting an attentive model to change its prediction.
We show that GPT-3 becomes less attentive with an increased number of demonstrations, while its accuracy on the test data improves.
arXiv Detail & Related papers (2023-11-16T06:27:35Z)
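A minimal sketch of the CAT idea follows: swap part of an input with the counterpart from another example and check whether the prediction changes. The NLI-style (premise, hypothesis) framing and the generic classifier interface are assumptions, not the paper's exact protocol.

```python
# Minimal sketch of the counterfactual attentiveness idea: an attentive model
# should change its prediction when part of the input is swapped out.
from typing import Callable

def cat_attentiveness(
    pairs: list[tuple[str, str]],         # (premise, hypothesis) examples
    classify: Callable[[str, str], str],  # any model under test (assumed interface)
) -> float:
    """Fraction of examples whose prediction changes under a swapped premise."""
    changed = 0
    n = len(pairs)
    for i, (premise, hypothesis) in enumerate(pairs):
        original = classify(premise, hypothesis)
        # Counterfactual: premise borrowed from the next example (cyclically).
        swapped_premise = pairs[(i + 1) % n][0]
        counterfactual = classify(swapped_premise, hypothesis)
        changed += original != counterfactual
    return changed / n
```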
- Humans and language models diverge when predicting repeating text [52.03471802608112]
We present a scenario in which the performance of humans and LMs diverges.
Human and GPT-2 LM predictions are strongly aligned in the first presentation of a text span, but their performance quickly diverges when memory begins to play a role.
We hope that this scenario will spur future work in bringing LMs closer to human behavior.
arXiv Detail & Related papers (2023-10-10T08:24:28Z)
- ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations [52.26802326949116]
We quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations.
ChatGPT exhibits exceptional proficiency in detecting and reasoning about causal relations.
It can identify the majority of discourse relations when explicit discourse connectives are present, but implicit discourse relations remain a formidable challenge.
arXiv Detail & Related papers (2023-04-28T13:14:36Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- "You made me feel this way": Investigating Partners' Influence in Predicting Emotions in Couples' Conflict Interactions using Speech Data [3.618388731766687]
How romantic partners interact with each other during a conflict influences how they feel at the end of the interaction.
In this work, we used BERT to extract linguistic features (i.e., what partners said) and openSMILE to extract paralinguistic features (i.e., how they said it) from a dataset of 368 German-speaking Swiss couples.
Based on those features, we trained machine learning models to predict whether partners feel positive or negative after the conflict interaction.
arXiv Detail & Related papers (2021-06-03T01:15:41Z)
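A minimal sketch of such a pipeline follows: BERT embeddings for the text, openSMILE acoustic functionals for the audio, concatenated and fed to a classifier. The German BERT checkpoint, the eGeMAPS feature set, and the logistic-regression classifier are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch: text features (what was said) + acoustic features
# (how it was said), concatenated and used to train a classifier.
import numpy as np
import opensmile
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
bert = AutoModel.from_pretrained("bert-base-german-cased").eval()
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

def text_features(utterance: str) -> np.ndarray:
    """Mean-pooled BERT embedding of one utterance."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0).numpy()

def audio_features(wav_path: str) -> np.ndarray:
    """openSMILE functionals (one row) for a recording."""
    return smile.process_file(wav_path).to_numpy().squeeze(0)

def train(transcripts, wav_paths, labels):
    """labels: 1 = felt positive after the conflict, 0 = felt negative."""
    X = np.stack([
        np.concatenate([text_features(t), audio_features(w)])
        for t, w in zip(transcripts, wav_paths)
    ])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```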
- Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task [8.17947290421835]
We propose a communication task based on color-concept associations.
In Experiment 1, we establish several key properties of the mental representations of these expectations.
In Experiment 2, we examine the downstream consequences of these representations for communication.
arXiv Detail & Related papers (2021-05-13T20:42:28Z)
- "where is this relationship going?": Understanding Relationship Trajectories in Narrative Text [28.14874371042193]
Given a narrative describing a social interaction, systems make inferences about the underlying relationship trajectory.
We construct a new dataset, Social Narrative Tree, which consists of 1250 stories documenting a variety of daily social interactions.
arXiv Detail & Related papers (2020-10-29T02:07:05Z)
- Learning Interactions and Relationships between Movie Characters [37.27773051465456]
We propose neural models to learn and jointly predict interactions, relationships, and the pair of characters that are involved.
Localizing the pair of interacting characters in video is a time-consuming process; instead, we train our model to learn from clip-level weak labels.
We evaluate our models on the MovieGraphs dataset, show the impact of modalities and of longer temporal context for predicting relationships, and achieve encouraging performance using weak labels.
arXiv Detail & Related papers (2020-03-29T23:11:24Z)